About Apheris
Apheris powers federated life sciences data networks, addressing the critical challenge of
accessing proprietary data locked in silos due to IP and privacy concerns. Publicly available
datasets are insufficient to train high-quality ML models that meet industry requirements.
Our product addresses this by enabling life sciences organizations to collaboratively train
higher quality models on complementary data from multiple parties. We are now doubling
down on two key areas of interest: structural biology and ADMET.
About the role
At Apheris, we power federated data networks in life sciences to address the data bottleneck in training highly performant machine learning models. Publicly available molecular datasets are insufficient to train models that meet real industry requirements. Our product enables biopharma organizations to collaboratively train higher-quality models on their combined data, while ensuring that data ownership, IP, and governance remain with the
original custodians. Our federated computing infrastructure is designed with privacy and control as first-class concerns.
As we double down on structural biology and ADMET as core areas within our drug discovery work, we are looking for a privacy-focused Senior ML Engineer to take technical ownership of privacy risk assessment and mitigation within our federated modelling initiatives. This is a hands-on, high-impact role centred on understanding how real drug discovery models behave in practice, identifying where privacy risks emerge, and generating empirical evidence to assess and mitigate those risks.
You will work within our AI Applications Engineering team and act as a technical authority on privacy for machine learning in drug discovery. A key part of the role is mapping real-world model usage to concrete threat models and experimental designs and clearly communicating the resulting evidence and conclusions to external partners, consortium stakeholders, and internal leadership.
You should bring strong hands-on experience with machine learning models used in drug discovery, particularly structure-based and protein–ligand modelling, with exposure to adjacent areas such as ADMET. You should be comfortable working directly with modelling pipelines, uncertainty estimation, and model outputs to reason about privacy risk, rather than treating privacy as a theoretical or policy-driven concern.
If you want to be part of a mission-driven team building federated AI systems for life sciences, and you are motivated by turning complex modelling behaviour into clear, defensible privacy conclusions for high-stakes collaborations, this role is for you.
Logistics
Our interview process is split into four phases:
- Initial Screening: If your application matches our requirements, we invite you to an initial video call to explore the fit. In this 30-minute interview, you will get to know us and the role. The interviewer will ask about your relevant experience and skills, and will answer any questions you may have about the company and the role itself.
- Take-home exercise: You will be asked to complete a short, offline coding assessment and prepare a case study to present in the next stage.
- Presentation & Deep Dive: In this phase, we will ask you to give a short presentation on your case-study topic. Domain experts from our team will assess the skills and knowledge required for the role, asking about meaningful experiences and your solutions to specific scenarios aligned with the position we are staffing.
- Final Interview: Finally, we invite you to meet with our founders to talk about our culture and to meet future co-workers in person.