This seems to me the best guide:
Assessing the FAIRness of data
which proposes this tool for evaluating the FAIRness of data:
The long road to fairer algorithms, 2020
If the Horizon2020 template is used, it is already subdivided into Findable, Accessible, etc.
14 exemplar universal metrics covering each of the FAIR sub-principles.
The metrics request a variety of evidence from the community, some of which may require specific new actions.
For instance, digital resource providers must provide a publicly accessible document(s) that provides machine-readable metadata (FM-F2, FM-F3) and details their plans with respect to identifier management (FM-F1B), metadata longevity (FM-A2), and any additional authorization procedures (FM-A1.2).
They must ensure the public registration of their identifier schemes (FM-F1A), (secure) access protocols (FM-A1.1), knowledge representation languages (FM-I1), licenses (FM-R1.1), and provenance specifications (FM-R1.2).
Evidence of ability to find the digital resource in search results (FM-F4), linking to other resources (FM-I3), FAIRness of linked resources (FM-I2), and meeting community standards (FM-R1.3) must also be provided.
We could claim that our data meet a FAIR metric by verifying the 14 points.
We could fall within the highlighted ones.
For FM-A2, it depends on what they mean by longevity.
The non-highlighted points need further investigation, with more in-depth descriptions.
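One of the highlighted metrics, FM-F2, asks for machine-readable metadata, which is commonly published as JSON-LD embedded in a resource's landing page. A minimal sketch of such a check, assuming that convention (the landing-page content and DOI below are invented for illustration):

```python
import json
import re

def extract_jsonld(html: str):
    """Extract embedded JSON-LD metadata blocks from an HTML landing page.

    A naive FM-F2 check: the resource passes if at least one block parses
    as JSON and declares a @context.
    """
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html, re.DOTALL | re.IGNORECASE)
    return [json.loads(b) for b in blocks]

# Hypothetical landing page for a dataset (identifiers are made up)
page = '''
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Dataset",
 "identifier": "https://doi.org/10.1234/example",
 "license": "https://creativecommons.org/licenses/by/4.0/"}
</script>
</head></html>
'''

meta = extract_jsonld(page)
fm_f2_ok = any("@context" in m for m in meta)
```

A real evaluator would fetch the page over HTTP and also follow content negotiation, but the core of the metric is the same: can a machine, not a human, recover the metadata.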
In this work, we examined how the FAIR principles can be applied to scientific workflows. We adapted the FAIR principles to make the PREDICT workflow, a drug repurposing workflow based on machine learning, open and reproducible. Therefore, the main contribution of this paper is the OpenPREDICT case study, which demonstrates how to make a machine learning workflow FAIR and open. For this, we have created an ontology profile that reuses several semantic models to show how a workflow can be semantically modeled. We published the workflow representation, data, and metadata in a triple store, which was used as a FAIR Data Point. In addition, new competency questions have been defined for FAIR workflows, together with SPARQL queries that answer them. Among the main lessons learned, we highlight how the main existing workflow modeling approaches can be reused and enhanced by the profile definition. However, reusing these semantic models proved to be a challenging task, since they present reproducibility issues and different conceptualizations, sometimes overlapping in their terminology. A limitation of this work is that it requires a human-intensive effort to apply the ontology profile to existing workflows, where workflow versioning and prospective and retrospective provenance are manually formalized. To overcome this issue we are developing the FAIR workbench, a reference implementation as a Jupyter Notebook plug-in that facilitates the semantic annotation of workflows.
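The mapping from a competency question to a query over the triple store can be illustrated without SPARQL machinery: the essence is matching a triple pattern against the graph. A minimal sketch in plain Python, using hypothetical P-Plan-style terms (the workflow name, step names, and predicates below are invented, not taken from the OpenPREDICT model):

```python
# Toy triple store: (subject, predicate, object) tuples describing a
# hypothetical FAIR workflow. All identifiers are illustrative only.
triples = {
    ("ex:OpenPREDICT", "rdf:type", "p-plan:Plan"),
    ("ex:OpenPREDICT", "p-plan:hasStep", "ex:FeatureGeneration"),
    ("ex:OpenPREDICT", "p-plan:hasStep", "ex:ModelTraining"),
    ("ex:ModelTraining", "p-plan:isPrecededBy", "ex:FeatureGeneration"),
}

def match(pattern, store):
    """Return variable bindings for one triple pattern.

    Terms starting with '?' are variables (as in SPARQL); everything
    else must match the stored triple exactly.
    """
    results = []
    for triple in store:
        binding, ok = {}, True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Competency question: "Which steps does the workflow contain?"
steps = sorted(b["?step"] for b in
               match(("ex:OpenPREDICT", "p-plan:hasStep", "?step"), triples))
```

A real FAIR Data Point would answer the same question with a SPARQL `SELECT ?step WHERE { ex:OpenPREDICT p-plan:hasStep ?step }` against the published graph; the toy matcher just makes the pattern-binding step explicit.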
p. 3/23 (this point could be used to introduce the possibility of searching, reusing, and sharing the article's data, since they are deposited in the Open ............... repository)
The FAIR principles describe a minimal set of requirements for data management and stewardship. By
adhering to the FAIR data principles, the data produced by a solution can be findable, retrievable, possibly
shared and reused and, above all, properly preserved (Wilkinson et al., 2016).
GO FAIR provides a set of technology specifications and implementations for these
components, such as a data repository (the FAIR Data Point), and the methodologies for semantic modelling
(the FAIRification process) and evaluation (FAIR metrics).
p. 6/23 (this point could be referenced for the procedure described: a collection of instructions)
A Workflow is a collective of Instructions since its parts have the same functional role in the whole,
i.e. each Instruction has the same descriptive functional role for the whole (Workflow). [...] the Instruction term is commonly used in the plural as an "outline or manual of technical procedure"