1. FAIR Evaluator tool

Recipe Overview

Reading Time: 30 minutes
Executable Code: No
Difficulty:
Recipe Type: Hands-on
Maturity Level & Indicator: not applicable

1.1. Ingredients

Ingredient                                    | Type                                                       | Comment
HTTP 1.1 protocol                             | data communication protocol                                |
guidance on persistent resolvable identifiers | policy                                                     |
Persistent Uniform Resource Locators (PURL)   | redirection service                                        |
Archival Resource Key (ARK)                   | identifier minting service; identifier resolution service |
Handle system                                 | identifier minting service; identifier resolution service |
DOI                                           | identifier minting service                                 | based on the Handle system
identifiers.org                               | identifier resolution service                              |
EZID resolution service                       | identifier resolution service                              |
name2things (n2t.net) resolution service      | identifier resolution service                              |
FAIR Evaluator                                | FAIR assessment                                            |
FAIRshake                                     | FAIR assessment                                            |
RDF/Linked Data                               | model                                                      |

Actions.Objectives.Tasks | Input | Output
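Several of the ingredients above are identifier resolution services. As a quick illustration, the following Python sketch (assuming the requests library and outbound HTTP access; the identifier used is only a hypothetical example) resolves a compact identifier through identifiers.org and prints where it lands:

    import requests

    # Hypothetical example: the compact identifier taxonomy:9606 (Homo sapiens),
    # expressed as a resolvable identifiers.org URL.
    compact_id = "https://identifiers.org/taxonomy:9606"

    response = requests.get(compact_id, allow_redirects=True, timeout=30)
    print("requested:", compact_id)
    print("resolved :", response.url)          # final URL after following redirects
    print("status   :", response.status_code)  # 200 indicates successful resolution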

1.2. Objectives

  • Perform an automatic assessment of a dataset against the FAIR principles [1], via indicator tests expressed as nanopublications, using the FAIR Evaluator [2].

  • Obtain human- and machine-readable reports highlighting strengths and weaknesses with respect to FAIR.

1.3. Step by Step Process

1.3.1. Loading the FAIR Evaluator web application

Navigate to the FAIR Evaluator tool; a public instance of the web front end is available at https://fairsharing.github.io/FAIR-Evaluator-FrontEnd/.
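Before starting, you may want to confirm that the web application is reachable from your network. A minimal sketch, assuming the requests library is installed:

    import requests

    # Front-end URL taken from this recipe; expect HTTP 200 if the app is up.
    url = "https://fairsharing.github.io/FAIR-Evaluator-FrontEnd/"
    print(requests.get(url, timeout=30).status_code)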

1.3.2. Understanding the FAIR indicators

In order to run the FAIR Evaluator, it is important to understand the notion of FAIR indicators (formerly referred to as FAIR metrics). One may browse the list of currently defined community indicators from the Collections page.
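The running Evaluator also exposes its registry of indicator tests over HTTP. The sketch below assumes a JSON listing is available under a /metrics path on the public instance, and that each entry carries name and description fields; treat both the endpoint and the field names as assumptions to verify against the instance you use:

    import requests

    BASE = "https://w3id.org/FAIR_Evaluator"  # assumed API base of the public instance

    resp = requests.get(f"{BASE}/metrics",
                        headers={"Accept": "application/json"}, timeout=60)
    resp.raise_for_status()
    for metric in resp.json():
        # Field names are assumptions; inspect the raw JSON if they differ.
        print(metric.get("name"), "-", (metric.get("description") or "")[:80])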

1.3.3. Preparing the input information

To run an evaluation, the FAIR Evaluator needs the following four inputs from users:

  1. a collection of FAIR indicators, selected from the list described above.

  2. a globally unique, persistent, resolvable identifier (GUID) for the resource to be evaluated (a quick resolvability check is sketched after this list).

  3. a title for the evaluation; since evaluations are saved, adopting a consistent naming convention makes future searches easier.

  4. a person identifier in the form of an ORCID.
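Because many indicator tests hinge on the identifier (input 2) actually resolving, it can be worth checking this before submitting. A minimal sketch, assuming the requests library and using a DOI as a purely illustrative example:

    import requests

    # Illustrative GUID: a DOI expressed as a resolvable https URL.
    guid = "https://doi.org/10.1038/sdata.2016.18"

    resp = requests.get(guid, allow_redirects=True, timeout=30)
    print("final URL:", resp.url)
    print("status   :", resp.status_code)  # 200 suggests the GUID resolves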

1.3.4. Running the FAIR Evaluator

Click the ‘Run Evaluation’ button on the https://fairsharing.github.io/FAIR-Evaluator-FrontEnd/#!/collections/new/evaluate page.
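In principle, the same action can be scripted against the Evaluator's HTTP interface. Everything below (base URL, endpoint path, payload field names, collection number, ORCID) is an assumption or placeholder modelled on the front-end form; verify it against the API documentation of the instance you use:

    import requests

    BASE = "https://w3id.org/FAIR_Evaluator"   # assumed API base
    collection_id = 1                          # placeholder indicator collection

    payload = {
        "resource": "https://doi.org/10.1038/sdata.2016.18",  # GUID to evaluate
        "executor": "0000-0000-0000-0000",                    # placeholder ORCID
        "title": "Example evaluation",                        # saved with the result
    }
    resp = requests.post(f"{BASE}/collections/{collection_id}/evaluate",
                         json=payload, timeout=600)
    resp.raise_for_status()
    print(resp.json())  # machine-readable evaluation result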

1.3.5. Analysing the FAIR Evaluator report

Following execution of the FAIR Evaluator, a detailed report is generated.

It is then time to dig into the details and figure out why some indicators report a failure.
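If the report is exported as JSON, a short script can tally which indicators passed and which failed. The structure assumed below (a mapping from test identifier to a result whose serialized text mentions success or failure) is an assumption; adapt it to the report you actually download:

    import json

    # Hypothetical filename for a previously downloaded evaluation report.
    with open("evaluation_report.json") as fh:
        report = json.load(fh)

    passed, failed = [], []
    for test_id, result in report.items():
        # Assumed convention: the serialized result states whether the test succeeded.
        (passed if "success" in str(result).lower() else failed).append(test_id)

    print(f"{len(passed)} indicators passed, {len(failed)} failed")
    for test_id in failed:
        print("needs attention:", test_id)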

1.4. Conclusion

Using software tools to assess FAIR maturity is an essential activity to ensure that processes and capabilities actually deliver and that claims can be checked. Furthermore, only automation can cope with the scale and volume of assets to evaluate. Software-based evaluations are repeatable, reproducible, and free of bias (other than biases that may stem from the definitions of the FAIR indicators themselves). They are, however, more demanding in terms of technical implementation and knowledge. Services such as the FAIR Evaluator are essential for gauging improvements in data management services and for helping developers build FAIR services and data.

1.5. References

  1. Wilkinson, M.D. et al. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3, 160018 (2016). https://doi.org/10.1038/sdata.2016.18
  2. Wilkinson, M.D. et al. Evaluating FAIR maturity through a scale-able, automated, community-governed framework. Scientific Data 6, 174 (2019). https://doi.org/10.1038/s41597-019-0184-5

1.6. Authors