Tiresias
Research

News · March 2026

Tiresias Joins the NVIDIA Inception Program

We're pleased to announce that Tiresias has been accepted into the NVIDIA Inception Program — NVIDIA's global accelerator for startups working at the frontier of AI and deep learning. For us, this is more than a badge. It's a partnership with the company whose hardware underlies most of the meaningful AI research happening in the world today, and a commitment to the approach we've staked the company on: building world models for human taste.

What we're building

Most recommendation systems solve a retrieval problem: find content similar to what a user has engaged with before. Tiresias solves a different problem. We ask: given only who a person is — their personality, their values, the cognitive fingerprint they carry through life — what will they love?

This is the project of building a world model for human taste: a model that understands not just patterns in historical click data, but the underlying structure of human preference itself. It's harder than collaborative filtering. It requires a genuine theory of personality grounded in psychometric science. And it requires serious compute.
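To make the shape of the problem concrete, here is a deliberately minimal sketch. Everything in it is hypothetical — the trait vector, the item features, and the taste matrix are illustrative stand-ins, not Tiresias's actual model. The idea it shows is just the core assumption: if a person is a point in psychometric trait space and content is a point in feature space, a learned mapping between the two spaces can score affinity directly, with no click history required.

```python
import numpy as np

# Hypothetical Big Five trait vector (openness, conscientiousness,
# extraversion, agreeableness, neuroticism), each scored in [0, 1].
person = np.array([0.9, 0.4, 0.3, 0.6, 0.5])

# Content items described by illustrative genre features:
# (cerebral, comedic, action, sentimental)
items = np.array([
    [0.9, 0.1, 0.2, 0.3],   # slow-burn sci-fi drama
    [0.2, 0.9, 0.3, 0.4],   # sitcom
    [0.1, 0.2, 0.9, 0.1],   # action blockbuster
])

# A made-up taste matrix W mapping traits to feature affinities.
# In a real system W would be learned from behavioural data; here it
# is random, purely to make the arithmetic runnable.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 4))

# Bilinear affinity: score_i = person^T W item_i
scores = person @ W @ items.T
ranking = np.argsort(-scores)   # best-scoring item first
```

The point of the sketch is the data flow, not the numbers: the model's job is to learn `W` (or a far richer nonlinear equivalent) so that the score agrees with what people actually love.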

A convergence the field is beginning to recognise

The broader AI research community is arriving at a similar conviction through a different door. The most exciting recent work in cognitive AI isn't about scaling what people watch or buy — it's about modelling how people think. Researchers are beginning to demonstrate that foundation models fine-tuned on behavioural and psychological data can predict individual human decisions with a fidelity that hand-crafted models cannot approach. The implication is significant: personality is learnable at scale, and a sufficiently rich model of personality generalises across domains.

Taste is a decision. Personality predicts decisions. That chain of reasoning is what Tiresias is built on — and it's increasingly where the science points.

Where NVIDIA's stack comes in

Building world models for human taste is not a CPU problem. Psychometric inference at the point of recommendation — running personality scoring, embedding, and ranking in real time, for every request — demands GPU infrastructure designed for this kind of workload. NVIDIA Inception connects us to exactly that.

Across our roadmap, we expect to draw on the full depth of NVIDIA's accelerated computing platform:

  • RAPIDS: GPU-accelerated data science — cuDF for feature engineering on behavioural datasets, cuML for personality clustering and dimensionality reduction at scale. What takes hours on CPU takes minutes on GPU.
  • CUDA-accelerated model training: Fine-tuning large language models on psychometric and behavioural data requires serious compute. CUDA and cuDNN give us the throughput to train and iterate at a pace that's otherwise impossible.
  • Triton Inference Server: Serving multiple embedding models and personality classifiers simultaneously, with intelligent batching and sub-millisecond latency. Triton is how we get from a trained model to a production API.
  • TensorRT: Quantizing and optimising our inference engines for deployment — reducing model size and latency without sacrificing predictive accuracy, which matters when personality scoring has to happen in the critical path of a recommendation request.
  • NVIDIA NIM: Pre-optimised embedding microservices with standard APIs. As we expand our latent space of content and personality representations, NIM accelerates our path to production without rebuilding serving infrastructure from scratch.
  • NVIDIA Merlin: An end-to-end GPU-accelerated framework for recommendation systems — from NVTabular feature engineering to HugeCTR training to Triton serving. As Tiresias scales to games, books, and music, Merlin provides the infrastructure backbone.
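As one concrete instance of the clustering step mentioned above: cuML deliberately mirrors the scikit-learn interface (`fit`, `labels_`, `cluster_centers_`), so moving a CPU prototype to GPU is largely an import swap. The sketch below implements the underlying algorithm (Lloyd's k-means) in plain NumPy so it runs anywhere; the "personality vectors" are synthetic and purely illustrative, not our pipeline or data.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means. cuML's KMeans wraps the same idea
    behind a scikit-learn-style fit/predict API, executed on GPU."""
    rng = np.random.default_rng(seed)
    # initialise centres from k distinct data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre (Euclidean)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centre as the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic "personality vectors": two loose groups in 5-D trait space.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(loc=0.2, scale=0.05, size=(50, 5)),
    rng.normal(loc=0.8, scale=0.05, size=(50, 5)),
])
labels, centers = kmeans(X, k=2)
```

On GPU the loop body is the part that parallelises: the distance matrix and the argmin are exactly the operations cuML accelerates, which is where the hours-to-minutes gap comes from at real dataset sizes.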

What comes next

Tiresias is live today with film and TV recommendations at t-me.ai. Games, anime & manga, books, and music are on the roadmap. As we build out those verticals, we'll be publishing the research behind them — the psychometric models, the training approaches, and the evaluation frameworks we're developing.

NVIDIA Inception brings us closer to the compute, the expertise, and the community we need to do this properly. We're grateful for their recognition, and we intend to justify it.

If you're a researcher, engineer, or partner working on adjacent problems — in psychometrics, recommender systems, or GPU-accelerated ML — we'd love to hear from you.