Exponential Learning
# Uncomment the following line when running on Google Colab
# !pip install "autora"
The exponential learning experiment models how performance improves with practice. It is initialized with a specific formula and effect parameters; here we use the defaults.
import numpy as np
from autora.experiment_runner.synthetic.psychology.exp_learning import exp_learning
s = exp_learning()
Check the docstring to get information about the model:
help(exp_learning)
Help on SyntheticExperimentCollection in module autora.experiment_runner.synthetic.utilities object:

class SyntheticExperimentCollection(builtins.object)
 |  SyntheticExperimentCollection(name: 'Optional[str]' = None, description: 'Optional[str]' = None, params: 'Optional[Dict]' = None, variables: 'Optional[VariableCollection]' = None, domain: 'Optional[Callable]' = None, run: 'Optional[Callable]' = None, ground_truth: 'Optional[Callable]' = None, plotter: 'Optional[Callable[[Optional[_SupportsPredict]], None]]' = None, factory_function: 'Optional[_SyntheticExperimentFactory]' = None) -> None
 |
 |  Represents a synthetic experiment.
 |
 |  Attributes:
 |      name: the name of the theory
 |      params: a dictionary with the settable parameters of the model and their respective values
 |      variables: a VariableCollection describing the variables of the model
 |      domain: a function which returns all the available X values for the model
 |      run: a function which takes X values and returns simulated y values **with
 |          statistical noise**
 |      ground_truth: a function which takes X values and returns simulated y values **without any
 |          statistical noise**
 |      plotter: a function which plots the ground truth and, optionally, a model with a
 |          `predict` method (e.g. scikit-learn estimators)
 |
 |  Methods defined here:
 |
 |  __delattr__(self, name)
 |      Implement delattr(self, name).
 |
 |  __eq__(self, other)
 |      Return self==value.
 |
 |  __hash__(self)
 |      Return hash(self).
 |
 |  __init__(self, name: 'Optional[str]' = None, description: 'Optional[str]' = None, params: 'Optional[Dict]' = None, variables: 'Optional[VariableCollection]' = None, domain: 'Optional[Callable]' = None, run: 'Optional[Callable]' = None, ground_truth: 'Optional[Callable]' = None, plotter: 'Optional[Callable[[Optional[_SupportsPredict]], None]]' = None, factory_function: 'Optional[_SyntheticExperimentFactory]' = None) -> None
 |      Initialize self. See help(type(self)) for accurate signature.
 |
 |  __repr__(self)
 |      Return repr(self).
 |
 |  __setattr__(self, name, value)
 |      Implement setattr(self, name, value).
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |
 |  __dict__
 |      dictionary for instance variables (if defined)
 |
 |  __weakref__
 |      list of weak references to the object (if defined)
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |
 |  __annotations__ = {'description': 'Optional[str]', 'domain': 'Optional...
 |
 |  __dataclass_fields__ = {'description': Field(name='description',type='...
 |
 |  __dataclass_params__ = _DataclassParams(init=True,repr=True,eq=True,or...
 |
 |  __match_args__ = ('name', 'description', 'params', 'variables', 'domai...
 |
 |  description = None
 |
 |  domain = None
 |
 |  factory_function = None
 |
 |  ground_truth = None
 |
 |  name = None
 |
 |  params = None
 |
 |  plotter = None
 |
 |  run = None
 |
 |  variables = None
... or use the describe function:
from autora.experiment_runner.synthetic.utilities import describe
print(describe(s))
Exponential Learning

Args:
    p_asymptotic: additive bias on constant multiplier
    lr: learning rate
    maximum_initial_value: upper bound for initial p value
    minimum_initial_value: lower bound for initial p value
    minimum_trial: upper bound for exponential constant
    name: name of the experiment
    resolution: number of allowed values for stimulus

Examples:
    >>> s = exp_learning()
    >>> s.run(np.array([[.2,.1]]), random_state=42)
       P_asymptotic  trial  performance
    0           0.2    0.1     0.205444
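The parameters above map onto the classic exponential learning law, in which performance relaxes from an initial level toward an asymptote at a fixed rate. The sketch below illustrates that law; the exact internal parameterization of `exp_learning` (e.g. how `p_asymptotic` enters the formula, and the default parameter values used here) may differ, so treat this as an illustration rather than the model's ground truth:

```python
import numpy as np

def exp_learning_curve(trial, p_inf=1.0, p_0=0.0, lr=0.03):
    """Classic exponential learning law: performance rises from an
    initial level p_0 toward an asymptote p_inf at learning rate lr.
    Parameter values here are assumptions for illustration only."""
    return p_inf - (p_inf - p_0) * np.exp(-lr * trial)

trials = np.arange(1, 101)
p = exp_learning_curve(trials)
# p increases monotonically and saturates near the asymptote p_inf.
```

The learning rate `lr` controls how quickly the curve flattens: after roughly `3 / lr` trials, performance is within about 5% of the asymptote.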
The synthetic experiment s has properties like the name of the experiment:
s.name
'Exponential Learning'
... a valid variables description:
s.variables
VariableCollection(independent_variables=[IV(name='P_asymptotic', value_range=(0, 0.5), allowed_values=array([0. , 0.00505051, 0.01010101, 0.01515152, 0.02020202, 0.02525253, 0.03030303, 0.03535354, 0.04040404, 0.04545455, 0.05050505, 0.05555556, 0.06060606, 0.06565657, 0.07070707, 0.07575758, 0.08080808, 0.08585859, 0.09090909, 0.0959596 , 0.1010101 , 0.10606061, 0.11111111, 0.11616162, 0.12121212, 0.12626263, 0.13131313, 0.13636364, 0.14141414, 0.14646465, 0.15151515, 0.15656566, 0.16161616, 0.16666667, 0.17171717, 0.17676768, 0.18181818, 0.18686869, 0.19191919, 0.1969697 , 0.2020202 , 0.20707071, 0.21212121, 0.21717172, 0.22222222, 0.22727273, 0.23232323, 0.23737374, 0.24242424, 0.24747475, 0.25252525, 0.25757576, 0.26262626, 0.26767677, 0.27272727, 0.27777778, 0.28282828, 0.28787879, 0.29292929, 0.2979798 , 0.3030303 , 0.30808081, 0.31313131, 0.31818182, 0.32323232, 0.32828283, 0.33333333, 0.33838384, 0.34343434, 0.34848485, 0.35353535, 0.35858586, 0.36363636, 0.36868687, 0.37373737, 0.37878788, 0.38383838, 0.38888889, 0.39393939, 0.3989899 , 0.4040404 , 0.40909091, 0.41414141, 0.41919192, 0.42424242, 0.42929293, 0.43434343, 0.43939394, 0.44444444, 0.44949495, 0.45454545, 0.45959596, 0.46464646, 0.46969697, 0.47474747, 0.47979798, 0.48484848, 0.48989899, 0.49494949, 0.5 ]), units='performance', type=<ValueType.REAL: 'real'>, variable_label='Asymptotic Performance', rescale=1, is_covariate=False), IV(name='trial', value_range=(1, 100), allowed_values=array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35., 36., 37., 38., 39., 40., 41., 42., 43., 44., 45., 46., 47., 48., 49., 50., 51., 52., 53., 54., 55., 56., 57., 58., 59., 60., 61., 62., 63., 64., 65., 66., 67., 68., 69., 70., 71., 72., 73., 74., 75., 76., 77., 78., 79., 80., 81., 82., 83., 84., 85., 86., 87., 88., 89., 90., 91., 92., 93., 94., 95., 96., 97., 98., 99., 100.]), units='trials', 
type=<ValueType.REAL: 'real'>, variable_label='Trials', rescale=1, is_covariate=False)], dependent_variables=[DV(name='performance', value_range=(0, 1.0), allowed_values=None, units='performance', type=<ValueType.REAL: 'real'>, variable_label='Performance', rescale=1, is_covariate=False)], covariates=[])
... now we can generate the full domain of the data:
x = s.domain()
x
array([[  0. ,   1. ],
       [  0. ,   2. ],
       [  0. ,   3. ],
       ...,
       [  0.5,  98. ],
       [  0.5,  99. ],
       [  0.5, 100. ]])
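The domain is simply the Cartesian product of the allowed values of the two independent variables (100 × 100 = 10,000 rows here). A minimal numpy sketch of how such a grid can be built (the actual `domain` implementation may differ):

```python
import numpy as np

# Allowed values mirroring the variables above (assumed, for illustration):
# 100 evenly spaced values per independent variable
p_asymptotic = np.linspace(0, 0.5, 100)
trial = np.linspace(1, 100, 100)

# Cartesian product: every combination of the two IVs, with trial
# varying fastest, matching the ordering of s.domain() above
grid = np.array(np.meshgrid(p_asymptotic, trial, indexing="ij")).reshape(2, -1).T
```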
... the experiment runner, which can be called to generate experimental results:
experiment_data = s.run(x)
experiment_data
|      | P_asymptotic | trial | performance |
|------|--------------|-------|-------------|
| 0    | 0.0          | 1.0   | 0.036016    |
| 1    | 0.0          | 2.0   | 0.068082    |
| 2    | 0.0          | 3.0   | 0.085388    |
| 3    | 0.0          | 4.0   | 0.112860    |
| 4    | 0.0          | 5.0   | 0.130249    |
| ...  | ...          | ...   | ...         |
| 9995 | 0.5          | 96.0  | 0.990840    |
| 9996 | 0.5          | 97.0  | 0.976043    |
| 9997 | 0.5          | 98.0  | 0.972922    |
| 9998 | 0.5          | 99.0  | 0.969821    |
| 9999 | 0.5          | 100.0 | 0.976879    |
10000 rows × 3 columns
... a function to plot the ground truth (no noise):
s.plotter()
... against a fitted model if it exists:
from sklearn.linear_model import LinearRegression
ivs = [iv.name for iv in s.variables.independent_variables]
dvs = [dv.name for dv in s.variables.dependent_variables]
X = experiment_data[ivs]
y = experiment_data[dvs]
model = LinearRegression().fit(X, y)
s.plotter(model)
/Users/younesstrittmatter/Documents/GitHub/AutoRA/autora-synthetic/venv/lib/python3.11/site-packages/sklearn/base.py:464: UserWarning: X does not have valid feature names, but LinearRegression was fitted with feature names
  warnings.warn(
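The warning appears because the model was fitted on a pandas DataFrame (which carries feature names), while the plotter presumably calls `predict` on a plain numpy array. One way to avoid it is to strip the feature names before fitting; a minimal sketch with hypothetical data:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical experiment data, for illustration only
df = pd.DataFrame({"P_asymptotic": [0.0, 0.1, 0.2], "trial": [1.0, 2.0, 3.0]})
y = np.array([0.03, 0.10, 0.15])

# Fitting on .to_numpy() discards the feature names, so later predictions
# on plain arrays no longer trigger the feature-names UserWarning.
model = LinearRegression().fit(df.to_numpy(), y)
pred = model.predict(np.array([[0.3, 4.0]]))
```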
We can wrap these functions for use with the state logic of AutoRA. First, we create the state with the variables:
from autora.state import StandardState, on_state, experiment_runner_on_state, estimator_on_state
from autora.experimentalist.grid import grid_pool
from autora.experimentalist.random import random_sample
# We can get the variables from the runner
variables = s.variables
# With the variables, we initialize a StandardState
state = StandardState(variables)
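Conceptually, the `on_state` wrappers adapt a plain function so that it reads its inputs from, and writes its outputs back to, a state object, returning a new state rather than mutating the old one. The stripped-down sketch below illustrates this pattern; it is not the actual AutoRA implementation, and `MiniState`, `on_mini_state`, and `pool` are invented names for illustration:

```python
from dataclasses import dataclass, replace
from typing import List, Optional

@dataclass(frozen=True)
class MiniState:
    # A single field standing in for StandardState's richer contents
    conditions: Optional[List] = None

def on_mini_state(f, output):
    """Wrap f so that its return value is written to the given state
    field, producing a new immutable state."""
    def wrapped(state, **kwargs):
        result = f(**kwargs)
        return replace(state, **{output: result})
    return wrapped

def pool():
    # Toy experimentalist: proposes a fixed list of conditions
    return [1, 2, 3]

pool_on_mini_state = on_mini_state(pool, output="conditions")
state = pool_on_mini_state(MiniState())
```

Because each wrapped call returns a fresh state, a workflow becomes a chain of `state = step(state)` assignments, as in the cells below.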
Wrap the experimentalists in the on_state function to use them on state:
# Wrap the functions to use on state
# Experimentalists:
pool_on_state = on_state(grid_pool, output=['conditions'])
sample_on_state = on_state(random_sample, output=['conditions'])
state = pool_on_state(state)
state = sample_on_state(state, num_samples=2)
print(state.conditions)
      P_asymptotic  trial
8380      0.419192   81.0
2670      0.131313   71.0
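Conceptually, `grid_pool` enumerates the full Cartesian product of the allowed values and `random_sample` draws rows from it. A self-contained sketch with pandas (illustrative only; the real experimentalists operate on the `VariableCollection`, and the toy allowed values below are assumptions):

```python
import itertools
import pandas as pd

# Toy allowed values, assumed for illustration
p_asymptotic = [0.0, 0.25, 0.5]
trial = [1.0, 2.0]

# Grid pool: all combinations of the independent variables
pool = pd.DataFrame(list(itertools.product(p_asymptotic, trial)),
                    columns=["P_asymptotic", "trial"])

# Random sample: draw two candidate conditions from the pool
conditions = pool.sample(n=2, random_state=42)
```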
Wrap the runner with the experiment_runner_on_state wrapper to use it on state:
# Runner:
run_on_state = experiment_runner_on_state(s.run)
state = run_on_state(state)
state.experiment_data
|      | P_asymptotic | trial | performance |
|------|--------------|-------|-------------|
| 8380 | 0.419192     | 81.0  | 0.955751    |
| 2670 | 0.131313     | 71.0  | 0.892922    |
Wrap the regressor with the estimator_on_state wrapper:
theorist = LinearRegression()
theorist_on_state = estimator_on_state(theorist)
state = theorist_on_state(state)
# Access the last model:
model = state.models[-1]
print(f"performance = "
f"{model.coef_[0][0]:.2f}*P_asymptotic "
f"{model.coef_[0][1]:.2f}*trial "
f"{model.intercept_[0]:+.2f} ")
performance = 0.00*P_asymptotic 0.01*trial +0.45
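The near-flat coefficients hint that a linear model is a poor theory for exponential learning: performance saturates with practice, which a straight line cannot capture. A quick sketch of this mismatch on noise-free data generated from the classic exponential law (parameter values assumed for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Noise-free saturating learning curve (assumed lr = 0.05)
trials = np.arange(1, 101).reshape(-1, 1)
performance = 1.0 - np.exp(-0.05 * trials.ravel())

# A straight line systematically misses the saturation, so R^2
# stays well below 1 even on noise-free data.
linear = LinearRegression().fit(trials, performance)
r2 = linear.score(trials, performance)
```

This is exactly the situation AutoRA's theorist loop is meant for: swapping in a more expressive model class should recover the exponential form that the linear theorist cannot.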