Employing a neural network to solve the repetition spacing problem
The repetition spacing problem consists in computing optimum inter-repetition intervals in the process of human learning. The intervals are computed for individual pieces of information (later called items) and for a given individual. The entire input data are the grades obtained by the student in repetitions of items in the learning process. This problem has until now been most effectively solved by means of a successive series of algorithms known commercially as SuperMemo, developed by Dr. Wozniak at SuperMemo World, Poland. Wozniak's model of memory used in developing the most recent version of the algorithm (Algorithm SM8) cannot be considered the ultimate algebraic description of human long-term memory. Most notably, the relationship between the complexity of the synaptic pattern and item difficulty is not well understood. More light might be shed on this relationship once a neural network is employed to provide an adequate mapping between the memory status, grading and item difficulty.
Using current state-of-the-art solutions, the technical feasibility of a neural network application in a real-time learning process seems to depend on applying an understanding of the learning process to adequately define the problems posed to the network. The network cannot be expected to generate a solution upon receiving input in the form of the history of grades given in the course of repetitions of thousands of items; the computational and space complexity of such an approach would run well beyond the network's ability to learn and respond in real time.
Using Wozniak's model of the two components of long-term memory, we postulate that the following neural network solution might result in fast convergence and high repetition spacing accuracy.
The two memory variables needed to describe the state of a given engram are retrievability (R) and stability (S) of memory (Wozniak, Gorzelanczyk, Murakowski, 1995). The following equation relates R and S:
(1) R = e^{-k*t/S}
where:
 k is a constant
 t is time
Eqn. (1) describes how retrievability changes in time at a given stability, and allows us to determine the optimum inter-repetition interval for a given stability and a given forgetting index.
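Under Eqn. (1), the optimum interval is the time at which R decays to the requested retention level, i.e. 1 minus the forgetting index. A minimal sketch in C++ (function names and the choice k = 1 are illustrative, not part of the algorithm):

```cpp
#include <cassert>
#include <cmath>

// Retrievability after time t at stability S, per Eqn (1); the constant
// k is taken as 1 by default, purely for illustration.
double retrievability(double t, double S, double k = 1.0) {
    return std::exp(-k * t / S);
}

// Optimum inter-repetition interval: the time at which R decays to the
// requested retention level, i.e. 1 minus the forgetting index.
double optimum_interval(double S, double forgetting_index, double k = 1.0) {
    return -(S / k) * std::log(1.0 - forgetting_index);
}
```

Note that the interval scales linearly with stability, so each successful repetition (which increases S) lengthens the next interval.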
The exact algebraic shape of the function that describes the change of stability upon a repetition is not known. However, experimental data indicate that stability usually increases by a factor of 1.3 to 3 for properly timed repetitions, and that the increase depends on item difficulty (the greater the difficulty, the lower the increase). By providing an approximation of the optimum repetition spacing taken from experimental data, as produced by the optimization matrices of Algorithm SM8, the neural network can be pre-trained to compute the stability function:
(2) S_{i+1}=f_{s}(R,S_{i},D,G)
where:
 S_{i} is stability after the ith repetition
 R is retrievability before repetition
 D is item difficulty
 G is grade given in the ith repetition
The stability function is the first function to be determined by the neural network. The second one is the item difficulty function with analogous input parameters:
(3) D_{i+1}=f_{d}(R,S,D_{i},G)
where:
 D_{i} is item difficulty approximation after the ith repetition
 R is retrievability before repetition
 S is stability after the ith repetition
 G is grade given in the ith repetition
Consequently, a neural network with four inputs (D, R, S and G) and two outputs (S and D) can be used to encapsulate the entire knowledge needed to compute interrepetition intervals (see Implementation of the repetition spacing neural network).
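The four-input, two-output mapping can be pictured with the following sketch; the record layout and the placeholder evaluation (a linear interpolation over the 1.3-3.0x stability gain range quoted above, assuming grades in 0..5) are illustrative stand-ins, not the trained network:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical I/O records for the repetition spacing network: the four
// inputs (D, R, S, G) and the two outputs named in the text. Field names
// are illustrative, not taken from the project.
struct NetInput  { double D; double R; double S; double G; };
struct NetOutput { double S_next; double D_next; };

// Placeholder standing in for the trained network: it demonstrates only
// the interface, interpolating the 1.3-3.0x stability gain range linearly
// over grades assumed to lie in 0..5, and leaving difficulty unchanged.
NetOutput evaluate_stub(const NetInput& in) {
    double gain = 1.3 + (3.0 - 1.3) * (in.G / 5.0);
    return NetOutput{in.S * gain, in.D};
}
```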
The following steps will be taken to verify the feasibility of the proposed approach:
 Pre-training of the neural network will be done on the basis of approximated S and D functions derived from the functions used in Algorithm SM8 and the experimental data collected with them
 Such a pre-trained network will be implemented as a SuperMemo Plug-In DLL that will replace the standard sm8opt.dll used by SuperMemo 8 for Windows. The teaching of the network will continue in a real learning process during alpha testing of the neural network DLL. A procedure designed specifically for this experiment will be used to provide cumulative results and a resultant neural network: the networks used in alpha testing will be fed with a matrix of input parameters, and their output will be used as training data for the resultant network that will take part in beta testing
 In the last step, beta testing of the neural network will be open to all volunteers over the Internet, directly from the SuperMemo Website. The volunteers will only be asked to submit their resultant networks for the final stage of the experiment, in which the ultimate network will be developed. Again, all the beta-testing networks will be used to train the resultant network. Future users of neural network SuperMemo (if the project proves successful) will obtain a network with a fair understanding of human memory, able to further refine its reactions to the interference of the learning process with the day-to-day activities of a particular student and particular study material.
The major problem in all repetition spacing algorithms is the delay between producing the output of the function of optimum intervals and being able to compare it with the result of applying the chosen inter-repetition interval in practice. On each repetition, the state of the network from the previous repetition must be remembered in order to generate the new state of the network. In practice, this equates to storing an enormous number of network states in between repetitions.
Luckily, Wozniak's model implies that the functions S and D are time-independent (interestingly, they are also likely to be user-independent!); therefore, the following approach may be taken to simplify the procedure:
Time moment                 | T_{1}                            | T_{2}                                                 | T_{3}
Decision                    | I_{1}, N_{1}, O_{1}=N_{1}(I_{1}) | I_{2}, N_{2}, O_{2}=N_{2}(I_{2})                      | I_{3}, N_{3}, O_{3}=N_{3}(I_{3})
Result of previous decision |                                  | O^{*}_{1}, E_{1}=O^{*}_{1}-O_{1}                      | O^{*}_{2}, E_{2}=O^{*}_{2}-O_{2}
Evaluation for teaching     |                                  | O^{'}_{1}=N_{2}(I_{1}), E^{'}_{1}=O^{*}_{1}-O^{'}_{1} | O^{'}_{2}=N_{3}(I_{2}), E^{'}_{2}=O^{*}_{2}-O^{'}_{2}
Where:
 E_{i} is the error associated with O_{i} (see error correction for memory stability and error correction for item difficulty)
 E^{'}_{i} is the error associated with O^{'}_{i}
 I_{i} are input data at T_{i}
 N_{i} is the network state at T_{i}
 O_{i} is the output decision of N_{i} given I_{i}, that is, the decision made after the ith repetition at T_{i}
 O^{*}_{i} is the optimum output decision that should have been produced at T_{i} instead of O_{i}; it can be computed from the grade and O_{i} (the grade indicates how O_{i} should have changed to obtain a better approximation)
 O^{'}_{i} is the output decision of N_{i+1} given I_{i}, that is, the decision for the ith repetition that would be made at T_{i+1}
 T_{i} is time of the ith repetition of a given item
The above approach requires only I_{i-1} to be stored for each item between the repetitions taking place at T_{i-1} and T_{i}, substantially reducing the amount of data stored during the learning process (E^{'}_{i} is as valuable for training as E_{i}). In this way, the proposed solution is comparable in space complexity to Algorithm SM8! Only one (current) state of the neural network has to be remembered throughout the process.
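The bookkeeping described above can be sketched as follows; type and function names are hypothetical, and a std::function stands in for the network state N_{i}:

```cpp
#include <array>
#include <cassert>
#include <functional>

using Input   = std::array<double, 4>;  // D, R, S, G fed to the network
using Output  = std::array<double, 2>;  // next S, next D
using Network = std::function<Output(const Input&)>;

// All that needs to be stored per item between repetitions: I_{i-1}.
struct ItemRecord { Input last_input; };

// At the next repetition the *current* network is re-run on the stored
// input and compared with the optimum decision derived from the grade,
// yielding the training error E' = O* - O'.
Output training_error(const Network& net, const ItemRecord& item,
                      const Output& optimum) {
    Output o = net(item.last_input);
    return {optimum[0] - o[0], optimum[1] - o[1]};
}
```

Because S and D are time-independent, it does not matter that the network has changed between T_{i-1} and T_{i}: the error computed against the newer network is equally valid training data.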
These are the present implementation assumptions for the discussed project:
 neural network: unidirectional, layered, with resilient backpropagation; an input layer with four neurons, an output layer with two neurons, and two hidden layers (15 neurons each)
 item difficulty interpretation: same as in Algorithm SM8, i.e. defined by the A-Factor
 each item stores: Last repetition date, Stability (at last repetition), Retrievability (at last repetition), Item difficulty, Last grade
 default forgetting index: 10%
 network DLL input (at each repetition): item number and the current grade
 network DLL output (at each repetition): next repetition date
 neural network DLL implementation language: C++
 neural network DLL shell: SuperMemo 98 for Windows (same as the 32-bit SM8OPT.DLL shell)
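For illustration only, a forward pass through the 4-15-15-2 topology listed above might look as follows; the uniform placeholder weights and the use of sigmoid activations in every layer are assumptions of this sketch, and the resilient backpropagation training named above is not shown:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One fully connected layer: w[j][k] is the weight from input k to unit j.
struct Layer {
    std::vector<std::vector<double>> w;
    std::vector<double> b;  // biases, one per unit
};

static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Forward pass with sigmoid activations in every layer (placeholder choice).
std::vector<double> forward(const std::vector<Layer>& net, std::vector<double> x) {
    for (const Layer& lay : net) {
        std::vector<double> y(lay.b);  // start from the biases
        for (std::size_t j = 0; j < y.size(); ++j) {
            for (std::size_t k = 0; k < x.size(); ++k) y[j] += lay.w[j][k] * x[k];
            y[j] = sigmoid(y[j]);
        }
        x = std::move(y);
    }
    return x;
}

// The 4-15-15-2 topology from the assumptions above; uniform constant
// weights stand in for a pre-trained state.
std::vector<Layer> make_network() {
    auto layer = [](std::size_t out, std::size_t in) {
        return Layer{std::vector<std::vector<double>>(out, std::vector<double>(in, 0.1)),
                     std::vector<double>(out, 0.0)};
    };
    return {layer(15, 4), layer(15, 15), layer(2, 15)};
}
```

The DLL shell would map the stored item variables (D, R, S and the grade) onto the input vector and convert the two outputs back into a next repetition date via Eqn. (1).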