Florian Hartmann
I work on collaborative learning with LLMs at Google DeepMind. Previously, I wrote my master's thesis on federated learning, spent a few wonderful months at Mozilla implementing federated learning in Firefox, and then continued to work on federated learning at Google Research for a few years. I also did some work on differential privacy and recommender systems at Mozilla, and on NLP at Amazon.
I occasionally like to develop apps, such as a clipboard manager or a Hacker News client. Before I got into machine learning, I published several popular JavaScript libraries.

2024
- Social Learning: Towards collaborative learning with large language models
- LLMs Understand Base64: Learning is compression

2023
- What I read in 2023: Some notes on the books I read over the past year
- Distributed Differential Privacy for Federated Learning: Distributed training with formal privacy guarantees that hold end-to-end

2022
- What I read in 2022: Some notes on the 52 books I read over the last 52 weeks
- Working on Federated Learning: Why working on federated learning is interesting, meaningful, and fun

2021
- What I read in 2021: Some notes on the books I read this year
- Federated Smart Text Selection: What I worked on at Google over the past couple of years

2020
- What I read in 2020: Some notes on the books I read this year
- Diffing: Using the longest common subsequence to compute diffs
- That XOR Trick: Solving problems creatively with XOR

2019
- What I read in 2019: Some notes on the books I read this year
- Reservoir Sampling: Sampling from streams
- Count-Min Sketch: A probabilistic data structure for data stream summaries

2018
- What I read in 2018: Some notes on the books and papers I read this year
- TensorFlow: A bottom-up guide to computational graphs and tensors
- Quines: Self-reproducing programs
- Federated Learning for Firefox: Distributed machine learning for the Firefox URL bar
- Estimation Theory and Machine Learning: Formalizing what it means to compute good estimates
- Federated Learning: An introduction to collaborative machine learning
- RProp: Gradient descent without using gradient magnitudes
- Probabilistic Quantization: A probabilistic compression technique for federated learning