
BioImage Archive democratises access to microscopy data

EMBL-EBI’s BioImage Archive makes it easier to access and analyse biological research data

Credit: BioImage Archive/EMBL-EBI

The BioImage Archive, the world’s leading public database of biological image data across all imaging modalities, has introduced a new feature that enables users to explore images without downloading them. This helps researchers see at a glance what a dataset contains.

Users can explore the new feature by clicking on the Galleries tab of the BioImage Archive website.


The BioImage Archive is EMBL-EBI’s free, comprehensive and publicly available online resource that stores and distributes biological images. Researchers can submit data from any imaging technology, as long as the data are associated with a publication or have value beyond a single experiment. Data in the BioImage Archive are free to access, download, explore and reanalyse.


The BioImage Archive launched in 2019 and has been growing significantly year on year. In 2023, the BioImage Archive surpassed 100 terabytes of data stored, with tens of thousands of visits each year.


Introducing visual galleries

“Many of the datasets available through the BioImage Archive are very large and it can be difficult to see at a glance what they contain,” said Aybuke Kupcu Yoldas, Bioinformatician at EMBL-EBI. “To make it easier for users to understand what they are looking at, we introduced a new ‘galleries’ feature. This enables them to view images in their web browser, without downloading them, using a range of viewers. Each image has a unique URL that scientists can use to analyse it using their own tools. We hope this new feature will come in handy, especially for users who do not have much storage space or a robust internet connection.”


There are currently three visual galleries available in the BioImage Archive, containing some of the most commonly used datasets. The team will continue adding more data to these galleries.

  • AI gallery – datasets that have been developed and annotated to train and test artificial intelligence (AI) and machine learning tools. Users can clearly distinguish between the original images and their AI annotations.

  • Visual gallery – a selection of striking images from studies on a range of model species.

  • Volume EM gallery – examples from different volume electron microscopy (EM) techniques from EMBL-EBI’s EMPIAR public archive.

Users can also see the metadata next to the images, which gives them additional context about the dataset including the species captured, image size, channels, number of time points, and more.


Images can be opened in a number of viewers, some of which allow users to annotate on the fly, further speeding up and customising their image exploration.


Can’t get far without data standards

The microscopy landscape is highly fragmented, with data being produced in over 50 file formats. This is due to the wide range of imaging technologies available, and the fact that many microscope manufacturers use their own proprietary software. As a result, comparing data across formats is a major challenge.


To address this obstacle and enable the creation of its new galleries, the BioImage Archive team has converted data from these different formats into one community-driven, open-source, cloud-optimised file format called OME-Zarr. The use of a single format has been essential for implementing the galleries feature in the BioImage Archive. This highlights the importance of data standards for making data FAIRer (Findable, Accessible, Interoperable, Reusable) by making it easier to access, compare and analyse.
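To make the format concrete: an OME-Zarr dataset is not a single file but a directory hierarchy of small JSON metadata files and chunked binary arrays, which is what makes it cloud-friendly and browsable without a full download. The sketch below hand-writes a minimal Zarr v2 store with only the Python standard library to illustrate that layout; real datasets are written and read with the `zarr` and `ome-zarr` packages, and the exact keys shown (a tiny 4×4 single-chunk image) are illustrative, not a complete OME-NGFF document.

```python
import json
import tempfile
from pathlib import Path

# Illustrative sketch: hand-write a minimal Zarr v2 store, the on-disk
# layout used by OME-Zarr. Not a full OME-NGFF implementation.
root = Path(tempfile.mkdtemp()) / "image.zarr"
level0 = root / "0"            # path of the full-resolution multiscale level
level0.mkdir(parents=True)

# Group-level metadata: OME-NGFF "multiscales" lives in .zattrs at the root.
(root / ".zgroup").write_text(json.dumps({"zarr_format": 2}))
(root / ".zattrs").write_text(json.dumps({
    "multiscales": [{
        "version": "0.4",
        "axes": [{"name": "y", "type": "space"},
                 {"name": "x", "type": "space"}],
        "datasets": [{"path": "0"}],
    }]
}))

# Array-level metadata: a 4x4 uint8 image stored as one uncompressed chunk.
(level0 / ".zarray").write_text(json.dumps({
    "zarr_format": 2, "shape": [4, 4], "chunks": [4, 4],
    "dtype": "|u1", "compressor": None, "fill_value": 0,
    "order": "C", "filters": None,
}))
pixels = bytes(range(16))              # 16 uint8 pixel values, row-major
(level0 / "0.0").write_bytes(pixels)   # chunk file named by its chunk index

# Reading is the reverse: parse the JSON metadata, then fetch only the
# chunks you need -- which is why a browser viewer can stream one region
# of a huge image over HTTP without downloading the whole dataset.
meta = json.loads((level0 / ".zarray").read_text())
first_row = list((level0 / "0.0").read_bytes()[:4])
print(meta["shape"], first_row)        # -> [4, 4] [0, 1, 2, 3]
```

Because every level and chunk has a stable path under the store root, each image in the archive can be addressed by a plain URL, which is what lets external tools analyse BioImage Archive data in place.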


Fertile training ground for AI

“There is a big push to use AI and machine learning to extract knowledge from microscopy data,” explained Matthew Hartley, BioImage Archive Team Leader at EMBL-EBI. “For this to happen, researchers need well-curated and annotated datasets presented in a consistent format, and ideally available for analysis in the cloud. This is exactly what our AI collection offers. It was designed especially for developing, training and testing AI models.”





