Start Date

December 5, 2025, 12:00 PM

End Date

December 5, 2025, 1:00 PM

Description

The ultimate goal of machine learning is to build models that generalize well. Different machine learning frameworks achieve this goal in different ways. I-Con builds outputs that group like and separate unlike data points, producing strong groupings at the cost of robustness to noisy or mislabeled data. The Free Energy Principle (FEP) builds outputs within a larger probability density, accounting for noisy or mislabeled data at the cost of structure. We posit that I-Con and the FEP can be formally related as Minimum Description Length (MDL) codelengths and as limit cases of alpha-divergences. Rigorously proving this connection would provide new perspectives on I-Con and the FEP that could lead to a general form overcoming the weaknesses of each.
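One way to make the "limit cases of alpha-divergences" claim concrete (a sketch of the standard Amari parameterization, not the specific formulation used in this work): the alpha-divergence family interpolates between the two directions of the KL divergence, the quantity that appears both in contrastive objectives and in variational free energy.

```latex
% Amari alpha-divergence between densities p and q (standard form):
% D_\alpha(p \,\|\, q) = \frac{1}{\alpha(1-\alpha)}
%   \left( 1 - \int p(x)^{\alpha} \, q(x)^{1-\alpha} \, dx \right)
%
% Its endpoint limits recover the two KL directions:
% \lim_{\alpha \to 1} D_\alpha(p \,\|\, q) = \mathrm{KL}(p \,\|\, q)
% \lim_{\alpha \to 0} D_\alpha(p \,\|\, q) = \mathrm{KL}(q \,\|\, p)
```

Under this reading, an I-Con-style objective and an FEP-style objective could sit at different points (or endpoints) of a single divergence family, which is the kind of unification the proposed connection would formalize.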


The “Surprisal” Between Contrastive Representation Learning and the Free Energy Principle
