Syntactic Arrow of Time
Research
Forward-trained LLMs outperform backward ones. Is the asymmetry from grammar or meaning? Tested by training on POS tags only.
Python, PyTorch
Most LLM tutors forget between sessions and follow no plan. This one tracks what you know and guides you with adaptive learning plans. It is my thesis project: giving users control over their learning model.
I wanted to understand how sampling strategies affect generation quality, so I built a benchmark to find out.
News is noisy and biased. This generates a neutral daily summary in three languages.