TOPIM TECH Programme


Main topics covered

  • Machine Learning
  • Deep Learning
  • Data Analysis of Complex Data
  • Generation across Scales & Modalities
  • Open Data (Management and Protection)
  • Standardization

Programme Elements

  • Introductory Lectures
  • Invited Lectures
  • Oral presentations from your abstract submissions
  • Interactive poster session
  • Five-hour practical course - see below

Plus networking opportunities to enhance discussions and knowledge exchange

  • Scientific Speed-Dating
  • Group Excursion
  • Common Meals


Final programme now available

Overview of poster presentations

Practical Work

Five full hours of practical work are scheduled. To provide each of you with a session of interest, two parallel tracks are offered:



Please sign up during registration - no additional fee is charged!

Description of Practical Course


TUTORS: Jonathan Disselhorst & Prateek Katiyar, Tübingen

Machine learning is an indispensable part of the analysis of data from many sources, but the plethora of techniques and algorithms may seem overwhelming at first.

This course will provide a short introduction to topics such as unsupervised learning (e.g., clustering), supervised learning (e.g., neural networks), dimensionality reduction, and visualization of data, specifically with imaging data in mind. It will provide first steps for those interested in applying machine learning to their data.

All materials, software, code and data will be supplied on site.

Target audience:

Anyone with limited (or no) experience in machine learning, but with a clear interest in learning more about these techniques. 

Topics covered:

  • Data visualization
  • Dimensionality reduction
  • Unsupervised learning
  • Supervised learning
  • Neural networks

Learning objectives:

  • Learn about different machine learning techniques
  • Learn about practical aspects of applying these techniques
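To give a flavour of the techniques listed above, here is a minimal, self-contained sketch (not course material; synthetic data, plain NumPy) combining two of the topics: dimensionality reduction via PCA, followed by unsupervised clustering with a bare-bones k-means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image feature" vectors: two well-separated groups in 50 dimensions
a = rng.normal(loc=0.0, scale=1.0, size=(100, 50))
b = rng.normal(loc=5.0, scale=1.0, size=(100, 50))
X = np.vstack([a, b])

# Dimensionality reduction: PCA via SVD of the centred data
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                      # project onto the first two components

# Unsupervised learning: a minimal k-means with k = 2
centres = X2[rng.choice(len(X2), size=2, replace=False)]
for _ in range(20):
    dists = ((X2[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)       # assign each point to its nearest centre
    centres = np.array([X2[labels == k].mean(axis=0) for k in range(2)])

print(np.bincount(labels))              # sizes of the two recovered clusters
```

With clearly separated groups, as here, the two clusters found by k-means in the reduced 2D space coincide with the two generating distributions; in the course, the same ideas are applied to real imaging data.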


Tutor: Petr Walczysko, Dundee

The Open Microscopy Environment (OME) is an open-source software project that develops tools that enable access, analysis, visualization, sharing and publication of biological image data.
At present, we support more than 150 image data formats across many imaging modalities including medical imaging, high-content screening and whole-slide imaging.
Supported formats include DICOM, Bio-Rad SCN and cellSens VSI. We support and visualize multi-channel 3D time-lapse images ranging from whole-organism CT scans to microscopy images.
In this one day course, we will present the OMERO platform, and show how to import, organise, view, search, annotate and publish imaging data. Additionally, we will briefly introduce how to use a variety of processing tools with OMERO.

Target audience

  • Anyone wanting to use OMERO to organize, view, annotate and publish imaging data
  • Facility Managers wanting to train users
  • Life scientists with programming skills, bioinformaticians and image analysts.
  • Anybody interested in using Jupyter and OMERO
  • Graduate students, Postdocs, Group Leaders and Staff members

During this course you will learn about:

  • Getting started with OMERO
  • Importing and organizing your data
  • Viewing your data
  • Annotating and searching your data
  • Creating figures for publication and sharing
  • Using OMERO with image processing tools: ImageJ etc.
  • Managing and sharing data
  • Retrieving image data and metadata using an API
  • Integrating 3rd party tools to process your data in OMERO
  • Saving generated analytical results back to OMERO
  • Managing and sharing data at scale
  • Programmatically creating figures ready for publication/presentation

Also, we will show how to transition from manual data processing to automated processing workflows. We will introduce how to write applications against the OMERO API, how to integrate a variety of processing tools with OMERO and how to automatically generate output ready for publication.
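As a taste of the API session, a minimal sketch of retrieving an image and its metadata with the omero-py BlitzGateway. The host, credentials and image ID below are placeholders, and the script requires access to a running OMERO server, so it is illustrative rather than runnable as-is:

```python
# Minimal omero-py sketch: connect, fetch an image, read metadata and one plane.
# Host, credentials and image ID are hypothetical placeholders.
from omero.gateway import BlitzGateway

conn = BlitzGateway("username", "password", host="omero.example.org", port=4064)
conn.connect()

image = conn.getObject("Image", 123)     # look up an image by its ID
print(image.getName(), image.getSizeX(), image.getSizeY(), image.getSizeC())

pixels = image.getPrimaryPixels()
plane = pixels.getPlane(0, 0, 0)         # first z-section, channel, timepoint
print(plane.shape)                       # a NumPy array of pixel values

conn.close()
```

The same gateway objects can be used to attach annotations, save analytical results back to the server, and script figure generation, which is where the automated workflows mentioned above come in.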

This course will be delivered by members of the OMERO team. The OME project is supported by BBSRC and Wellcome Trust.

Further information also at