Authors
Eli Ben-Sasson, Eden Saig
Publication date
2018
Conference
10th Innovations in Theoretical Computer Science Conference (ITCS 2019)
Publisher
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Description
This paper studies families of distributions T that are amenable to retentive learning, meaning that an expert can retain users who seek to predict their future, assuming user attributes are sampled from T and exposed gradually over time. Limited attention span is the main problem experts face in our model. We make two contributions. First, we formally define the notions of retentively learnable distributions and properties. Along the way, we define a retention complexity measure of distributions and a natural class of retentive scoring rules that model the way users evaluate the experts they interact with. These rules are shown to be tightly connected to the truth-eliciting "proper scoring rules" studied in Decision Theory since the 1950s [McCarthy, PNAS 1956]. Second, we take a first step towards relating retention complexity to other well-studied measures in computational complexity. In particular, we show that linear properties (over the binary field) are retentively learnable, whereas random Low-Density Parity-Check (LDPC) codes have, with high probability, maximal retention complexity. Intriguingly, these results resemble known results from the field of property testing and suggest that deeper connections between retentive distributions and locally testable properties may exist.
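To make the "truth-eliciting" property concrete, here is a minimal Python sketch of the logarithmic scoring rule, a standard strictly proper scoring rule from decision theory. This illustrates the classical notion the abstract connects to, not the paper's retentive scoring rules; the distributions used are made-up example values.

    import math

    def log_score(report, outcome):
        # S(p, i) = ln p_i: the score a forecaster who reported
        # distribution `report` receives when outcome i occurs.
        return math.log(report[outcome])

    def expected_score(report, truth):
        # Expected score of `report` when outcomes are drawn from `truth`.
        return sum(q * log_score(report, i) for i, q in enumerate(truth))

    truth = [0.7, 0.2, 0.1]                      # hypothetical true distribution
    honest = expected_score(truth, truth)        # report the truth
    skewed = expected_score([0.5, 0.3, 0.2], truth)  # report something else

    # By Gibbs' inequality, the truthful report maximizes expected score;
    # this is exactly what makes the rule "proper".
    print(f"truthful report: {honest:.4f}")      # ~ -0.8018
    print(f"skewed report:   {skewed:.4f}")      # ~ -0.8869
    assert honest > skewed

Strict properness means the inequality above is strict for any report other than the true distribution, so a user scoring an expert this way incentivizes honest forecasts.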