Introduction

The hippocampus plays a critical role in spatial and episodic memory in mammals (Morris et al., 1982; Squire et al., 2004). Place cells in the hippocampus exhibit spatial tuning, firing selectively in specific locations of a spatial environment (Moser et al., 2008; O’Keefe and Dostrovsky, 1971). During sleep and quiet wakefulness, place cells show a time-compressed reactivation of spike sequences corresponding to recent experiences (Wilson and McNaughton, 1994; Foster and Wilson, 2006), a phenomenon known as replay. These replay events are thought to be important for memory consolidation (Carr et al., 2011).

The CA3 region of the hippocampus is highly recurrently connected and is the primary site of replay generation in the hippocampus. Input from CA3 supports replay in CA1 (Csicsvari et al., 2002; Yamamoto and Tonegawa, 2017; Nakashiba et al., 2008; Nakashiba et al., 2009), and peri-ripple spiking in CA3 precedes that of CA1 (Nitzan et al., 2022). These recurrent connections support intrinsically generated bursts of activity that propagate through the network.

Most replay models rely on a recurrent network structure in which a map of the environment is encoded in the recurrent connections of CA3 cells, such that cells with nearby place fields are more strongly connected. Some models assume this structure is pre-existing (Haga and Fukai, 2018; Pang and Fairhall, 2019), and some show how it could develop over time through synaptic plasticity (Theodoni et al., 2018; Jahnke et al., 2015). However, in novel environments place cells remap immediately in a seemingly random fashion (Leutgeb et al., 2005; Muller and Kubie, 1987). The CA3 region, in particular, undergoes pronounced remapping (Leutgeb et al., 2004; Leutgeb et al., 2005; Alme et al., 2014). In models that rely on environment-specific recurrent connectivity between place cells, such random remapping would leave the recurrent connections random with respect to the novel environment, and thus unable to support replay of the novel environment.

Rather, these models require a pre-existing structure of recurrent connections to be created for each environment. A proposed solution to account for remapping in hippocampal models is to assume the existence of multiple independent and uncorrelated spatial maps stored within the connections between cells. In this framework, the maximum number of maps is reached when the noise induced via connections needed for alternative maps becomes too great for a faithful rendering of the current map (Samsonovich and McNaughton, 1997; Battaglia and Treves, 1998; Azizi et al., 2013). However, experiments have found that hippocampal representations remain uncorrelated, with no signs of representation re-use, after testing as many as 11 different environments in rats (Alme et al., 2014).

Rather than re-using a previously stored map, another possibility is that a novel map for a novel environment is generated de novo through experience-dependent plasticity while in the environment. Given the timescales of synaptic and structural plasticity, one might expect that significant experience within each environment is needed to produce each new map. However, replay can occur after just 1-2 laps on novel tracks (Foster and Wilson, 2006; Berners-Lee et al., 2022), which means that the synaptic connections that allow the generation of the replayed sequences must already be present. Consistent with this expectation, it has been found that decoded sequences during sleep show significant correlations when decoded by place fields from future, novel environments. This phenomenon is known as preplay and has been observed in both rodents (Dragoi and Tonegawa, 2011; Dragoi and Tonegawa, 2013; Grosmark and Buzsaki, 2016; Liu et al., 2018) and humans (Vaz et al., 2023).

The existence of both preplay and immediate replay in novel environments suggests that the preexisting recurrent connections in the hippocampus that generate replay are somehow correlated with the pattern of future place fields that arise in novel environments. To reconcile these experimental results, we propose a model of intrinsic sequence generation based on randomly clustered recurrent connectivity, without reliance on pre-existing environment maps. Such clustering, also observed in cortex (Song et al., 2005), naturally arises from a combination of Hebbian and homeostatic plasticity in recurrent networks (Bourjaily and Miller, 2011; Litwin-Kumar and Doiron, 2014; Lynn et al., 2022), and spontaneously develops in networks of cultured hippocampal neurons (Antonello et al., 2022).

As an animal gains experience in an environment, the pattern of recurrent connections of CA3 would be shaped by Hebbian plasticity (Debanne et al., 1998; Mishra et al., 2016). Relative to CA1, which has little recurrent connectivity, CA3 has been found to have both more stable spatial tuning and a stronger functional assembly organization, consistent with the hypothesis that spatial coding in CA3 is influenced by its recurrent connections (Sheintuch et al., 2023). Gaining experience in different environments would then be expected to lead to individual place cells participating in multiple formed clusters. Such overlapping clustered connectivity may be a general feature of any hippocampal and cortical region that has typical Hebbian plasticity rules. Sadovsky et al., 2014 found such structure in the spontaneous activity of excitatory neurons in primary visual cortex, where cells formed overlapping but distinct functional clusters. Further, such preexisting clusters may help explain the correlations that have been found in otherwise seemingly random remapping (Kinsky et al., 2018; Whittington et al., 2020).

Since our model relies on its random recurrent connections for propagation of activity through the network during spontaneous activity, we also sought to assess the extent to which the internal activity within the network can generate place cells with firing rate peaks at a location where they do not receive a peak in their external input. Our reasoning is that landmarks in the environment, such as boundaries or corners, provide location-specific visual input to an animal, but locations between such features are primarily indicated by their distance from them, which in our model is represented by a reduction in the landmark-specific input. One can therefore consider our model’s inputs to correspond to boundary cells (Savelli et al., 2008; Solstad et al., 2008; Bush et al., 2014), with the place fields between boundaries generated by random internal structure within the network.

In our implementation of this model, we find that spontaneous sequences of spikes generated by a randomly clustered network can be decoded as spatial trajectories without relying on pre-configured, environment-specific maps. Because the network contains neither a preexisting map of the environment nor experience-dependent plasticity, we refer to the spike-sequences it generates as preplay. However, the model can also be thought of as a preexisting network in which immediate replay in a novel environment can be expressed and then reinforced through experience-dependent plasticity. We find that preplay in this model occurs most strongly when the network parameters are tuned to generate networks that have a small-world structure (Watts and Strogatz, 1998; Humphries et al., 2006; Humphries et al., 2008). Our results support the idea that preplay and immediate replay could be a natural consequence of the preexisting recurrent structure of the hippocampus.

Results

The model

We propose a model of preplay and immediate replay based on randomly clustered recurrent connections (Figure 1). In prior models of preplay and replay, a preexisting map of the environment is typically assumed to be contained within the recurrent connections of CA3 cells, such that cells with nearby place fields are more strongly connected (Figure 1a). While this type of model successfully produces replay (Haga and Fukai, 2018; Pang and Fairhall, 2019), such a map would only be expected to exist in a familiar environment, after experience-dependent synaptic plasticity has had time to shape the network (Theodoni et al., 2018). It remains unclear how, in the absence of such a preexisting map of the environment, the hippocampus can generate both preplay and immediate replay of a novel environment.

Illustration of the randomly clustered model

(a) Schematic diagram of prior replay models that rely on preexisting environment-specific structure, wherein each cell receives uniquely tuned Gaussian-shaped feed-forward inputs to define the place fields, and cells with nearby place fields are recurrently connected. Pairs of cells with closest place fields are connected most strongly (thicker arrows). (b) Schematic diagram of our model, where neurons are randomly placed into clusters and all neurons receive the same spatial and contextual information but with random, cluster-dependent input strengths. (c) Example representation of the network (8 clusters, mean cluster participation per cell of 1.5). Excitatory cells (each symbol) are recurrently connected with each other and with inhibitory cells (“Feedback inhibition”, individual inhibitory cells not shown) and receive feed forward input (“Sensory input”). Symbol color indicates neurons’ membership in clusters 1 and 2, with ∼ meaning not in the cluster. Symbol size scales with the number of clusters a neuron is in. Lines show connections between neurons that are in cluster 2. Symbol positions are plotted based on a t-distributed stochastic neighbor embedding (t-SNE) of the connection matrix, which reveals the randomly overlapping clusters. (d-f) Histograms based on the network in (c) of: (d) the distribution of input strengths; (e) the number of clusters that each neuron is a member of; and (f) the fraction of the excitatory cells to which each excitatory cell connects. (g) The Small-World Index (SWI) of the excitatory connections varies with the number of clusters and the mean number of clusters of which each neuron is a member (“cluster participation”). The median value of the SWI from 10 networks at each parameter point is plotted. The red dashed line shows a contour line where SWI = 0.4. 
Regions in white are not possible due to either cluster participation exceeding the number of clusters (lower right) or cells not being able to connect to enough other cells to reach the target global connectivity pc (upper left).

Our proposed alternative model is based on a randomly clustered recurrent network with random feed-forward inputs (Figure 1b). In our model, all excitatory neurons are randomly assigned to overlapping clusters that constrain the recurrent connectivity, and they all receive the same linear spatial and contextual input cues which are scaled by randomly drawn, cluster-dependent connection weights (see Methods).

An example network with 8 clusters and cluster participation of 1.5 (the mean number of clusters to which an excitatory neuron belongs) is depicted in Figure 1c. Excitatory neurons are recurrently connected to each other and to inhibitory neurons. Inhibitory cells have cluster-independent connectivity, such that all E-to-I and I-to-E connections exist with a probability of 0.25. Feed-forward inputs are independent Poisson spikes with random connection strength for each neuron (Figure 1d). Excitatory cells are randomly, independently assigned membership to each of the clusters in the network. All neurons are first assigned to one cluster, and then randomly assigned additional clusters to reach the target cluster participation (Figure 1e). Given the number of clusters and the cluster participation, the within-cluster connection probability is calculated such that the global connection probability matches the parameter pc = 0.08 (Figure 1f). The left peak in the distribution shown in Figure 1f is from cells in a single cluster and the right peak is from cells in two clusters, with the long tail corresponding to cells in more than two clusters.
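The cluster-assignment procedure described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not our simulation code: the function names are hypothetical, and here the within-cluster connection probability is calibrated empirically from the membership matrix rather than computed analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_clusters(n_cells, n_clusters, participation, rng):
    """Assign each cell to one cluster, then add random extra
    memberships until mean cluster participation reaches the target."""
    membership = np.zeros((n_cells, n_clusters), dtype=bool)
    membership[np.arange(n_cells), rng.integers(0, n_clusters, n_cells)] = True
    n_extra = int(round(n_cells * (participation - 1)))
    while n_extra > 0:
        i = rng.integers(0, n_cells)
        j = rng.integers(0, n_clusters)
        if not membership[i, j]:
            membership[i, j] = True
            n_extra -= 1
    return membership

def within_cluster_prob(membership, p_global):
    """Scale the within-cluster connection probability so that the
    expected global connection probability matches p_global."""
    counts = membership.astype(int)
    share = (counts @ counts.T) > 0          # pairs sharing >= 1 cluster
    np.fill_diagonal(share, False)
    n = membership.shape[0]
    frac_connectable = share.sum() / (n * (n - 1))
    return p_global / frac_connectable

M = assign_clusters(500, 8, 1.5, rng)
p_in = within_cluster_prob(M, 0.08)
print(M.sum(axis=1).mean())   # mean cluster participation, 1.5
print(p_in)                   # within-cluster probability > 0.08
```

Because only cell pairs sharing a cluster can connect, the within-cluster probability must exceed the global target pc; in the white region of Figure 1g no valid probability exists.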

For a given pc, excitatory connectivity is parameterized by the number of clusters in the network and the mean cluster participation. The small-world index (SWI; Neal, 2015; Neal, 2017) systematically varies across this 2-D parameterization (Figure 1g). A high SWI indicates a network with both clustered connectivity and short path lengths (Watts and Strogatz, 1998). For a fixed connection probability, SWI increases with more clusters and lower cluster participation, so long as cluster participation is greater than one to ensure sparse overlap of (and hence connections between) clusters. Networks in the top left corner of Figure 1g are not possible because, in that region, even with all possible within-cluster connections present the network could not reach the target global connectivity probability, pc. Networks in the bottom right are not possible because otherwise mean cluster participation would exceed the number of clusters. The dashed red line shows an example contour line where SWI = 0.4.
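The SWI itself (Neal, 2017) combines a network’s characteristic path length and clustering coefficient, each normalized between ring-lattice and random-graph reference values. The sketch below is an illustrative approximation using networkx and standard analytic estimates of the reference values, not our exact implementation:

```python
import math
import networkx as nx

def small_world_index(G):
    """Small-World Index (Neal, 2017): normalized path length times
    normalized clustering, each referenced to a ring lattice and a
    random graph (standard analytic approximations)."""
    n = G.number_of_nodes()
    k = 2 * G.number_of_edges() / n            # mean degree
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    C_latt = 3 * (k - 2) / (4 * (k - 1))       # ring-lattice clustering
    L_latt = n / (2 * k)                       # ring-lattice path length
    C_rand = k / n                             # random-graph clustering
    L_rand = math.log(n) / math.log(k)         # random-graph path length
    swi = ((L - L_latt) / (L_rand - L_latt)) * ((C - C_rand) / (C_latt - C_rand))
    return min(max(swi, 0.0), 1.0)             # clip to [0, 1]

# A rewired ring lattice is small-world; a matched random graph is not
G_sw = nx.connected_watts_strogatz_graph(500, 10, 0.1, seed=1)
G_er = nx.gnp_random_graph(500, 10 / 499, seed=1)
G_er = G_er.subgraph(max(nx.connected_components(G_er), key=len)).copy()
print(small_world_index(G_sw), small_world_index(G_er))
```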

Example activity

Our randomly clustered model produces both place fields and preplay with no environment-specific plasticity or preexisting map of the environment (Figure 2). Example place cell activity shows spatial specificity during linear track traversal (Figure 2a-c). Although the spatial tuning is noisy, this is consistent with the experimental finding that the place fields that are immediately expressed in a novel environment require experience in the environment to stabilize and improve decoding accuracy (Tang and Jadhav, 2022; Shin et al., 2019; Hwaun and Colgin, 2019). Raster plots of network spiking activity (Figure 2a) and example cell membrane potential traces (Figure 2b) demonstrate selective firing in specific track locations. Place fields from multiple networks generated from the same parameters, but with different input and recurrent connections, show spatial tuning across the track (Figure 2c).

Spatially correlated reactivations in networks without environment-specific connectivity or plasticity

(a-f) Example activity from the fiducial parameter set (15 clusters, mean cluster participation of 1.25). (a) Example raster plot from one place-field trial. Cells sorted by trial peak. (b) Example membrane traces from two of the cells in (a). (c) Place fields from 10 different networks generated from the same parameter set, sorted by peak location and normalized by peak rate. (d) Example raster plot (top) and population firing rate (bottom; blue line) showing preplay in a simulation of sleep. Horizontal dashed black line is the mean population rate across the simulation. Horizontal dashed red line is the threshold for detecting a population-burst event (PBE). PBEs that exceeded the threshold for at least 50 ms and had at least 5 participating cells were included in the preplay decoding analysis. Grey bars highlight detected events. (e) Example replay event (Top, raster plot. Bottom, Bayesian decoding of position). Event corresponds to the center event in (d). Raster includes only participating cells. The blue line shows the weighted correlation of decoded position across time. (f) Nine example decoded events from the same networks in (c). The width of each time bin is 10 ms. The height spans the track length. Same color scale as in (e). r is each event’s absolute weighted correlation. jd is the maximum normalized jump in peak position probability between adjacent time bins. The same event in (e) is shown with its corresponding statistics in the center of the top row. Preplay statistics calculated as in Farooq et al., 2019.

To test the ability of the model to produce preplay, we simulated sleep sessions in the same networks. Sleep sessions were simulated in a similar manner to the running sessions but with no location cue inputs active and a different, unique set of context cue inputs active to represent the sleep context. The strengths of the context cue inputs to the excitatory and inhibitory cells were scaled in order to generate an appropriate level of network activity, to account for the absence of excitatory drive from the location inputs (see Methods). During simulated sleep, the network produces structured spontaneous activations resembling preplay (Figure 2d-f). Example raster and population rate plots demonstrate spontaneous transient increases in spiking that exceed 1 standard deviation above the mean population rate, which denote population burst events (PBEs; Figure 2d). We considered PBEs that lasted at least 50 ms and contained at least 5 participating cells as candidates for Bayesian decoding (Shin et al., 2019). Bayesian decoding of an example PBE using the simulated place fields reveals a spatial trajectory (Figure 2e). We use the same two statistics as Farooq et al. (2019) to quantify the quality of the decoded trajectory: the absolute weighted correlation (r) and the maximum jump distance (jd; Figure 2f). The absolute weighted correlation of a decoded event is the absolute value of the linear Pearson’s correlation of space-time weighted by the event’s derived posteriors. Since sequences can correspond to either direction along the track, the sign of the correlation simply indicates direction while the absolute value indicates the quality of preplay. The maximum jump distance of a decoded event is the maximum jump in the location of peak probability of decoded position across any two adjacent 10-ms time bins of the event’s derived posteriors. A high-quality event will have a high absolute weighted correlation and a low maximum jump distance.
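Both event statistics can be computed directly from a decoded event’s posterior probability matrix. The sketch below follows the verbal definitions in the text; the function names are ours, and the toy event is synthetic:

```python
import numpy as np

def weighted_correlation(posterior):
    """Absolute weighted Pearson correlation of space vs. time,
    weighted by the decoded posteriors. posterior has shape
    (n_time_bins, n_positions), each row summing to 1."""
    t_idx, p_idx = np.indices(posterior.shape)
    w = posterior / posterior.sum()
    mt = (w * t_idx).sum()                      # weighted mean time bin
    mp = (w * p_idx).sum()                      # weighted mean position
    cov_tp = (w * (t_idx - mt) * (p_idx - mp)).sum()
    cov_tt = (w * (t_idx - mt) ** 2).sum()
    cov_pp = (w * (p_idx - mp) ** 2).sum()
    return abs(cov_tp / np.sqrt(cov_tt * cov_pp))

def max_jump(posterior):
    """Maximum jump in peak decoded position between adjacent
    time bins, normalized by track length."""
    peaks = posterior.argmax(axis=1)
    return np.abs(np.diff(peaks)).max() / posterior.shape[1]

# A clean diagonal "trajectory": high correlation, small jumps
event = np.eye(10) * 0.9 + 0.01   # rows peaked along the diagonal
print(weighted_correlation(event))  # ≈ 0.9
print(max_jump(event))              # 0.1
```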

Together, these results demonstrate that the model can reproduce key dynamics of hippocampal place cells, including spatial tuning and preplay, without relying on environment-specific recurrent connections.

Place Fields

To compare the place fields generated by the model to those from hippocampal place cells of rats, we calculated several place-field statistics for both simulated and experimentally recorded place fields (Figure 3). Because our model assumes no previous environment-specific plasticity, we analyzed data from place cells in rats on their first exposure to a W-track (Shin et al., 2019). Equivalent statistics of place-field peak rate, specificity, and spatial information are shown for experimental data (Figure 3a) and simulations (Figure 3b). We found that the model produces qualitatively similar (but not quantitatively identical) distributions for the fiducial parameter set.
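For reference, spatial information (bits/spike) and a related sparsity measure can be computed with the standard Skaggs-style formulas. These are textbook definitions and may differ in detail from the specificity measure plotted in Figure 3:

```python
import numpy as np

def spatial_info_and_sparsity(rate_map, occupancy):
    """Spatial information (bits/spike) and sparsity of a place
    field, from a spatial firing-rate map and an occupancy map."""
    p = occupancy / occupancy.sum()          # occupancy probability
    mean_rate = (p * rate_map).sum()
    nz = rate_map > 0                        # avoid log2(0)
    info = (p[nz] * (rate_map[nz] / mean_rate)
            * np.log2(rate_map[nz] / mean_rate)).sum()
    sparsity = mean_rate ** 2 / (p * rate_map ** 2).sum()
    return info, sparsity

# A sharply tuned field carries more bits/spike and is sparser
x = np.arange(100.0)
occ = np.ones(100)
narrow = np.exp(-0.5 * ((x - 50) / 3) ** 2) * 10
broad = np.exp(-0.5 * ((x - 50) / 20) ** 2) * 10
print(spatial_info_and_sparsity(narrow, occ))
print(spatial_info_and_sparsity(broad, occ))
```

Lower sparsity values indicate more spatially concentrated firing, so the narrow field scores lower on sparsity and higher on information.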

The model produces place fields with similar properties to hippocampal place fields

(a) Place field statistics for hippocampal place fields recorded in rats upon their first exposure to a W-track (Shin et al., 2019). Left, place-field peak rate (Hz). Center, place-field specificity (fraction of track). Right, place-field spatial information (bits/spike). (b) Same as (a) but for place fields from a set of 10 simulated networks at one parameter point (15 clusters and mean cluster participation of 1.25). (c) Network parameter dependence of place-field statistics. For each parameter point, the color indicates the mean over all place fields from 10 networks. Top row: mean statistics corresponding to the same measures of place fields used in panels (a, b). Bottom left: mean firing rate of the inhibitory cells. Bottom center: the KL-divergence of the distribution of place-field peaks relative to a uniform spatial distribution. Bottom right: fraction of place-field peaks located in the central third of the track.

These place-field properties depend on the network parameters (Figure 3c). With fewer clusters and lower cluster overlap (lower cluster participation), place fields have higher peak rates, specificity, and spatial information (Figure 3c, top row and bottom left). However, lower overlap also makes place-field locations less uniformly distributed, reflected in a higher KL-divergence from a uniform distribution (Figure 3c, bottom center) and a smaller fraction of place-field peaks in the central third of the track (Figure 3c, bottom right).
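The KL-divergence of peak locations relative to a uniform distribution, referenced above, can be estimated from a histogram of place-field peaks. A minimal sketch (the bin count and variable names are our own choices):

```python
import numpy as np

def peak_kl_divergence(peak_locations, n_bins, track_length):
    """KL divergence (bits) of the distribution of place-field peak
    locations relative to a uniform distribution over the track."""
    counts, _ = np.histogram(peak_locations, bins=n_bins,
                             range=(0, track_length))
    p = counts / counts.sum()
    q = 1.0 / n_bins                  # uniform reference
    nz = p > 0                        # empty bins contribute zero
    return (p[nz] * np.log2(p[nz] / q)).sum()

rng = np.random.default_rng(5)
uniform_peaks = rng.uniform(0, 100, 1000)
clumped_peaks = np.concatenate([rng.uniform(0, 10, 500),
                                rng.uniform(90, 100, 500)])
print(peak_kl_divergence(uniform_peaks, 10, 100))   # near 0
print(peak_kl_divergence(clumped_peaks, 10, 100))   # much larger
```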

Preplay

Having found that the model produces realistic place-field representations with neither place-field like inputs nor environment-specific spatial representation in the internal network connectivity (Figure 3), we next examined whether the same networks could generate spontaneous preplay of novel environments. To test this, for the same set of networks characterized by place-field properties in Figure 3, we simulated sleep activity by removing any location-dependent input cues and analyzed the resulting spike patterns for significant sequential structure resembling preplay trajectories (Figure 4). We find significant preplay in both our reference experimental data set (Shin et al., 2019; Figure 4a, b; see Figure 4—figure supplement 1 for example events) and our model (Figure 4c, d) when analyzed by the same methods as Farooq et al., 2019. For each detected event we calculated its absolute weighted correlation. We then generated 100 time-bin shuffles of each event, and for each shuffle recalculated the absolute weighted correlation to generate a null distribution of absolute weighted correlations. The distribution of absolute weighted correlations of actual events was significantly greater than the distribution of absolute weighted correlations of shuffled events for both the experimental data (Figure 4a, KS-test, p=2×10⁻¹², KS-statistic=0.078) and the simulated data (Figure 4c, KS-test, p=3×10⁻¹⁶, KS-statistic=0.29). Additionally, we found that this result is robust to random subsampling of cells in our simulated data (Figure 4—figure supplement 2). Our analyses of the hippocampal data produce similar results when analyzing each trajectory independently (Figure 4—figure supplement 3).
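The shuffle-based test described above can be sketched as follows: each event’s time bins are randomly permuted to build a null distribution of absolute weighted correlations, and the actual and null distributions are compared with a two-sample KS test. Toy posteriors stand in for decoded events here; the correlation function follows the definition given earlier.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

def abs_weighted_corr(post):
    """Absolute space-time correlation weighted by decoded posteriors."""
    t, p = np.indices(post.shape)
    w = post / post.sum()
    mt, mp = (w * t).sum(), (w * p).sum()
    c = (w * (t - mt) * (p - mp)).sum()
    return abs(c) / np.sqrt((w * (t - mt) ** 2).sum()
                            * (w * (p - mp) ** 2).sum())

def shuffle_null(post, n_shuffles, rng):
    """Null distribution from events with randomly permuted time bins."""
    n_t = post.shape[0]
    return np.array([abs_weighted_corr(post[rng.permutation(n_t)])
                     for _ in range(n_shuffles)])

# Toy "events": noisy diagonal trajectories
events = [np.eye(12) * 0.8 + rng.random((12, 12)) * 0.05
          for _ in range(50)]
actual = np.array([abs_weighted_corr(e) for e in events])
null = np.concatenate([shuffle_null(e, 20, rng) for e in events])
stat, p_value = ks_2samp(actual, null)
print(stat, p_value)   # large KS statistic, tiny p-value
```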

Preplay depends on modest cluster overlap

(a,c) The cumulative distribution function (CDF) of the absolute weighted correlations for actual events (blue line) versus shuffled events (red dashed line) of experimental data from Shin et al., 2019 (a; KS-test, p=2×10⁻¹², KS-statistic=0.078) and simulated data (c; KS-test, p=3×10⁻¹⁶, KS-statistic=0.29) reveal results similar to those in Figure 1h of Farooq et al., 2019. *** p<0.001. (b,d) P-value grids (p-value indicated logarithmically by color) showing that the actual decoded events are higher quality sequences than shuffles across a wide range of quality thresholds for both experimental data from Shin et al., 2019 (b) and simulated data (d). For each point on the grid the fraction of events that exceed the absolute weighted correlation threshold (y-axis) and do not exceed the maximum jump distance (x-axis) is calculated, and the significance of this fraction is determined by comparison against a distribution of corresponding fractions from shuffled events. Black squares indicate criteria that were not met by any events (either shuffled or actual). The panel is equivalent to Figure 1e of Farooq et al., 2019. (e) Network parameter dependence of several statistics quantifying the population-burst events. Top left, fraction of excitatory cells firing per event. Top right, mean excitatory cell firing rate (Hz). Bottom left, mean event duration (s). Bottom right, mean event frequency (Hz). Each point is the mean of data combined across all population-burst events of 10 simulated networks at each parameter point. Data from the same simulations as Figure 3. (f) Network parameter dependence of several statistics quantifying the Bayesian decoding. Top left, p-value of the absolute weighted correlations (from a KS-test as calculated in (c)). Top right, the shift in the median absolute weighted correlation of actual events relative to shuffle events. Bottom left, the fraction of events with significant absolute weighted correlations relative to the distribution of absolute weighted correlations from time bin shuffles of the event. Bottom right, the mean entropy of the position probability of all time bins in decoded trajectories.

For each event, we also calculated the maximum spatial jump of the peak probability of decoded position between any two adjacent time bins as a measure of the continuity of the decoded trajectory. The absolute weighted correlation (high is better) and maximum jump (low is better) were then two different measures of the quality of a decoded trajectory. We performed a bootstrap test that took both of these measures into account by setting thresholds for a minimum absolute weighted correlation and a maximum jump distance and then calculating the fraction of events meeting both criteria of quality. The significance of the fraction of events meeting both criteria was then determined by comparing it against a distribution of such fractions generated by sets of the time-bin shuffled events. We systematically varied both thresholds and found that the actual events are of significantly higher quality than chance for a wide range of thresholds in both the hippocampal (Figure 4b) and simulated (Figure 4d) data. The upper right corner of these grids cannot be significant since 100% of all possible events would be included in any shuffle or actual set. Points in the left-most column are not all significant because the strictness of the maximum jump distance means that very few events in either the actual or shuffled data sets meet the criterion, and therefore the analysis is underpowered. This pattern is similar to that seen in Farooq et al., 2019 (as shown in their Figure 1e).
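The threshold-grid bootstrap can be sketched given per-event correlation and jump statistics. The toy beta-distributed statistics below are illustrative stand-ins, and the resampling scheme (drawing shuffle sets of the same size as the actual event set) is our assumption about one reasonable implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def grid_pvalues(r_act, jd_act, r_shuf, jd_shuf,
                 r_thresh, jd_thresh, n_boot=1000):
    """For each (r, jd) threshold pair, p-value of the fraction of
    actual events passing both criteria, against fractions from
    resampled sets of shuffled events of matching size."""
    n = len(r_act)
    pvals = np.ones((len(r_thresh), len(jd_thresh)))
    for i, rt in enumerate(r_thresh):
        for j, jt in enumerate(jd_thresh):
            frac_act = np.mean((r_act >= rt) & (jd_act <= jt))
            idx = rng.integers(0, len(r_shuf), size=(n_boot, n))
            frac_null = np.mean((r_shuf[idx] >= rt)
                                & (jd_shuf[idx] <= jt), axis=1)
            pvals[i, j] = np.mean(frac_null >= frac_act)
    return pvals

# Toy data: actual events have higher r and smaller jumps than shuffles
r_act = rng.beta(5, 2, 200); jd_act = rng.beta(2, 5, 200)
r_shuf = rng.beta(2, 5, 2000); jd_shuf = rng.beta(5, 2, 2000)
P = grid_pvalues(r_act, jd_act, r_shuf, jd_shuf,
                 np.linspace(0.1, 0.7, 4), np.linspace(0.3, 0.9, 4))
print(P)   # small p-values across the grid for these toy data
```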

Both PBEs and preplay are significantly affected by the two network parameters (Figure 4e, f). The number of clusters and the extent of cluster overlap (indicated via mean cluster participation) affect PBE participation (Figure 4e, top left), firing rates (Figure 4e, top right), event durations (Figure 4e, bottom left), and event frequency (Figure 4e, bottom right). We find that significant preplay occurs only at moderate cluster overlap (Figure 4f, top left), where we also find the greatest increase from chance in the linearity of decoded trajectories (Figure 4f, top right). The fraction of events that are individually significant (determined by comparing the absolute weighted correlation of each decoded event against the set of absolute weighted correlations of its own shuffles) is similarly highest for modest cluster overlap (Figure 4f, bottom left). The mean entropy of position probability of each time bin of decoded trajectories is lowest for modest cluster overlap (Figure 4f, bottom right), meaning that high cluster overlap leads to more diffuse, less precise spatial decoding.

Preplay is due to successive activations of individual clusters

Figure 4f indicates that PBEs are best decoded as preplay when cluster participation is only slightly above one, indicating a small but non-zero degree of cluster overlap. We hypothesized that this can be explained as balancing two counteracting requirements: 1) sufficient cluster overlap is necessary for a transient increase in activity in one cluster to induce activity in another cluster, so as to extend any initiated trajectory; and 2) sufficient cluster isolation is necessary so that, early in a transient, spikes from an excited cluster preferentially add excitation to that same cluster. A network with too much cluster overlap will fail to coherently excite individual clusters, causing decoded positions to be spread randomly throughout the track, while a network with too little cluster overlap will fail to excite secondary clusters, causing decoded positions to remain relatively localized.

We find that the dependence of preplay on cluster overlap can indeed be explained by the manner in which clusters participate in PBEs (Figure 5). An example PBE (Figure 5a) shows transient recruitment of distinct clusters, with only one cluster prominently active at a time. We define a cluster as ‘active’ if its firing rate exceeds twice the rate of any other cluster. We calculated the number of active clusters per event (Figure 5b) and the duration of each active cluster period (Figure 5d). We find that these statistics vary systematically with the network parameters (Figure 5c, e), in a manner consistent with the dependence of preplay on cluster overlap (Figure 4f). When there is modest overlap of an intermediate number of clusters, events involve sequential activation of multiple clusters that are each active sufficiently long to correspond to at least one of the time bins used for decoding (10 ms). Figures 4 and 5 together indicate that high-quality preplay arises via a succession of individually active clusters. Such succession requires a moderate degree of cluster overlap, but this must be combined with sufficient cluster isolation to promote independent activation of just one cell assembly for the duration of each time-bin used for decoding.
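The "active cluster" criterion and duration measure described above can be sketched as follows; a toy event with three sequentially dominant clusters stands in for simulated data, and the function name is ours:

```python
import numpy as np

def active_clusters(cluster_rates, dt=0.001):
    """A cluster is 'active' if at any time step its rate exceeds
    twice that of every other cluster. Returns the active clusters
    and, for each, the duration (s) it remained most active."""
    n_t = cluster_rates.shape[1]
    top = cluster_rates.argmax(axis=0)                  # most-active cluster
    second = np.sort(cluster_rates, axis=0)[-2, :]      # runner-up rate
    dominant = cluster_rates[top, np.arange(n_t)] > 2 * second
    active = np.unique(top[dominant])
    durations = {int(c): (top == c).sum() * dt for c in active}
    return active, durations

# Toy event: three clusters with Gaussian rate bumps in sequence
t = np.arange(300)
rates = np.vstack([np.exp(-0.5 * ((t - c) / 30) ** 2) * 20
                   for c in (50, 150, 250)])
act, dur = active_clusters(rates)
print(act, dur)   # three active clusters, each ~0.1 s
```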

Coherent spiking within clusters supports preplay

(a) Example event. Top, spike rates averaged across neurons of individual clusters: Each firing rate curve is the smoothed mean firing rate across the population of cells belonging to each cluster. We defined clusters as “active” if at any point their rates exceed twice that of any other cluster. Three clusters meet the criterion of being active (green, then red, then blue). Bottom, raster plots: Cells belonging to each of the active clusters are plotted separately in the respective colors. Cells in multiple clusters contribute to multiple population curves, and cells in multiple active clusters appear in multiple rows of the raster plot. Cells that participate but are not in any active clusters are labeled “Other cells” and plotted in black. Only active cells are plotted. (b) For the fiducial parameter set (15 clusters, mean cluster participation of 1.25), the distribution over events of the number of active clusters per event. (c) The mean number of active clusters per event as a function of the network parameters. Same data as that used for the parameter grids in earlier figures. (d) For the fiducial parameter set (15 clusters, mean cluster participation of 1.25), the distribution of durations of active clusters for all active cluster periods across all events. The active duration was defined as the duration for which an active cluster remained the most-active cluster. (e) The mean active cluster duration as a function of the network parameters.

Small-world index correlates with preplay

We noticed that the highest quality of decoded trajectories (Figure 4f) seemed to arise in networks with the highest small-world index (SWI; Figure 1g). To test this, we simulated different sets of networks with both increased and decreased global E-to-E connection probability, pc. Changing pc, in addition to varying the number of clusters and the mean cluster participation, impacted the SWI of the networks (Figure 6, left column).

The Small-World Index of networks correlates with preplay quality

(a-c) Left column, the Small-World Index (SWI; plotted as color) is affected by the global E-to-E connection probability, pc. Red dotted line indicates a contour line of SWI = 0.4. This boundary shifts downward as pc increases. Center column, across parameter points in the network parameter grid, SWI correlates with an increase in the median absolute weighted correlation of decoded trajectories relative to shuffles (e.g. this corresponds in Figure 4c to the rightward shift of the CDF of measured absolute weighted correlations relative to the shuffle events). Each point is produced by analysis of all events across 10 networks from one parameter point in the grid on the left. Right column, same as the center column but each point is data from each of the 10 individual networks per parameter set. P-value and correlation, ρ, are calculated from Spearman’s rank-order correlation test. Dashed line is the least-squares fit. (a) Data from a parameter grid where the E-to-E connection probability was decreased by 50% and the E-to-E connection strength was doubled from their fiducial values used in prior figures. (b) Data from the same parameter grid as Figures 3-5. (c) Data from a parameter grid where the E-to-E connection probability was increased by 50% and the E-to-E connection strength scaled by two-thirds from their fiducial values.

We hypothesized that independent of pc, a higher SWI would correlate with improved preplay quality. To test this, we simulated networks across a range of parameters for three pc values: a decrease of pc by 50% to 0.04, the fiducial value of 0.08, and an increase by 50% to 0.12 (Figure 6a-c, respectively). For the decreased and increased pc cases, the E-to-E connection strength was respectively doubled or reduced to 2/3 of the fiducial strength to keep total E-to-E input constant. For each parameter combination, we quantified preplay quality as the rightward shift in median absolute weighted correlation of decoded preplay events versus shuffled events (as in Figure 4f, top right). We then asked if there was a correlation between that quantification of preplay quality and SWI.

Across all three pc values, SWI significantly correlated with improved preplay both across parameter sets (Figure 6, center column) and across individual networks (Figure 6, right column). These results support our prediction that higher small-world characteristics correspond to higher-quality preplay dynamics regardless of average connectivity.

Preplay significantly decodes to linear trajectories in arbitrary environments

Information about each environment enters the network via the feed-forward input connection strengths, which contain cluster-dependent biases. A new environment is simulated by re-ordering those input biases. We first wished to verify that a new environment simulated in this manner produced a distinct set of place fields. We therefore simulated place maps for leftward and rightward trajectories on linear tracks in two distinct environments (Figure 7a). The two maps with different directions of motion showed very high correlations within the same environment (Figure 7b, blue), while comparisons of trajectories across environments showed very low correlations (Figure 7b, red). We also performed simulations with extra laps of running and calculated the correlations between paired sets of place fields produced by random, independent splits of trials of the same trajectory. The distribution of these correlations was similar to the distribution of within-environment correlations (comparing opposite trajectories with the same spatial input), showing no significant de novo place-field directionality. This is consistent with hippocampal data in which place-field directionality is initially low in novel environments and increases with experience (Frank et al., 2004; Navratilova et al., 2012; Shin et al., 2019).

Trajectories decoded from population-burst events are significantly correlated with linear trajectories in arbitrary environments

(a) Place fields from a single network with simulated runs in both directions of travel on a linear track in two different environments. Each column of panels is the set of place fields for the trajectory labeled on the diagonal. Each row of panels has cells sorted by the order of place-field peaks for the trajectory labeled on the diagonal. The r values are the correlations between the corresponding remapped trajectory and its comparison on the diagonal. Note that correlations mirrored across the diagonal are equal because they correspond only to a change in the labels of the dimensions of the population rate vectors, which does not affect the vector correlation. (b) Distribution of the place-field map correlations across trajectories from both directions of travel on a linear track in two environments for 10 networks. Blue is the distribution of correlations for all left vs right place-field maps from the same environment. Red is the distribution of correlations from all pair-wise comparisons of trajectories from different environments. (c) An example event that shows a significant trajectory when it is decoded with place fields from one environment (top row), but not when it is decoded with place fields from another environment (bottom row). (d) An entire set of PBEs shows similar levels of absolute weighted correlations when decoded with different sets of place fields. In color are CDFs of absolute weighted correlations of decoded trajectories with leftward and rightward linear trajectories in each of the two environments (R1 and L1 are the rightward and leftward trajectories of environment one. R2 and L2 are the rightward and leftward trajectories of environment two). In black (all overlapping) are the corresponding absolute weighted correlations with each of the 4 trajectories arising from decoding of shuffled events.
(e) The significance of linearity of decoded trajectories indicated by p-value in color (as in Figure 4b) from decoding the same PBEs with the four different environment place fields. Black squares indicate criteria that were not met by any events (either shuffled or actual). Env. 1 left is the same as that shown in Figure 4d.

Because we simulated preplay without any location-specific inputs, we expected that the set of spiking events that significantly decode to linear trajectories in one environment (Figure 4) should decode with similar fidelity in another environment. Therefore, we decoded each PBE four times, once with the place fields of each trajectory (Figure 7c-e). As expected from the place map correlations (Figure 7a, b), an example event shows similar absolute weighted correlation with the place fields of trajectories from the same environment, but not with the place fields of trajectories from different environments (Figure 7c). The distributions of absolute weighted correlations arising from decoding of PBEs according to each of the four sets of place fields were consistent across environments (Figure 7d, colored lines), and all were significantly rightward shifted (indicating greater absolute weighted correlation) when compared to the absolute weighted correlations arising from the corresponding shuffled events (Figure 7d, overlapping black lines). If we consider both absolute weighted correlation and jump-distance thresholds as in Figure 4d, we find that the matrices of p-values are consistent across environments (Figure 7e). In summary, without environment-specific or place-field-dependent pre-assigned internal wiring, the model produces population-burst events which, as an ensemble, show significant preplay with respect to any selected environment.

Discussion

Our work shows that spontaneous population bursts of spikes that can be decoded as spatial trajectories can arise in networks with clustered random connectivity without pre-configured maps representing the environment. In our proposed model, excitatory neurons were randomly clustered with varied overlap and received feed-forward inputs with random strengths that decayed monotonically from the boundaries of a track (Figure 1). Even though the model neural circuit lacked place-field like input and lacked environment-specific internal wiring, the network exhibited both realistic place fields (Figures 2,3) and spontaneous preplay of novel, future environments (Figures 2,4).

We validated our modeling results by applying the same analyses to a previously collected experimental data set (Shin et al., 2019). Indeed, we replicated the general finding of hippocampal preplay reported in Farooq et al., 2019, although the p-value matrix for our experimental data (Figure 4b) is significant across a smaller range of threshold values than found in that prior work. This is likely due to differences in statistical power: the pre-experience sleep sessions of Shin et al., 2019 lasted no more than half an hour for each animal, while the pre-experience sleep sessions of Farooq et al., 2019 lasted 2-4 hours. However, finding statistically significant hippocampal preplay in an experiment not designed for studying preplay shows that the general result is robust to a number of methodological choices, including shorter recording sessions, use of a W-track rather than a linear track, and variations in candidate event detection criteria.

Although our model is a model of the recurrently connected CA3 region and the data set we analyze (Shin et al., 2019) comes from CA1 cells, the qualitative comparisons we make here are nevertheless useful. Despite some statistically significant quantitative differences, the general properties of place fields that we consider are qualitatively similar across CA1 and CA3 (Sheintuch et al., 2023; Harvey et al., 2020), and CA3 and CA1 generally reactivate in a coordinated manner (O’Neil et al., 2008; Karlsson and Frank, 2009).

The model parameters that controlled the clustering of the recurrent connections strongly influenced preplay and place-field quality. Moderate overlap of clusters balanced the competing needs for both a) sufficiently isolated clusters to enable cluster-wise activation and b) sufficiently overlapping clusters to enable propagation of activity across clusters (Figure 5). Such a balance in cluster overlap produces networks with small-world characteristics (Watts and Strogatz, 1998) as quantified by a small-world index (SWI; Neal, 2015; Neal, 2017). Networks with a high SWI, indicating high clustering (if two neurons are connected to the same third neuron, they are more likely than chance to be connected to each other) yet short paths (the mean number of connections needed to traverse from one neuron to any other), showed optimal preplay dynamics (Figure 6). The same networks could flexibly represent distinct remapped environments (Leutgeb et al., 2004; Leutgeb et al., 2005; Alme et al., 2014) solely through differences in scaling of feed-forward spatially linear input (Figure 7).

Across many species, small-world properties can be found at both the local neuronal network scale and the gross scale of the network of brain regions. At the neuronal connection scale, small-world properties have been reported in a number of networks, such as the C. elegans connectome (Watts and Strogatz, 1998; Humphries et al., 2008), the brainstem reticular formation (Humphries et al., 2006), mouse visual cortex (Sadovsky et al., 2014), cultured rat hippocampal neurons (Antonello et al., 2022), mouse prefrontal cortex (Luongo et al., 2016), and connectivity within the entorhinal-hippocampal region in rats (She et al., 2016). At the level of connected brain regions, small-world properties have been reported across the network of brain regions activated by fear memories in mice (Vetere et al., 2016), in the hippocampal-amygdala network in humans (Zhang et al., 2022), and across the entire human brain (Liao et al., 2010).

Our results suggest that the preexisting hippocampal dynamics supporting preplay may reflect general properties arising from randomly clustered connectivity. The model predicts that preplay quality will depend on the network’s balance of cluster isolation and overlap, as quantified by small-world properties. Synaptic plasticity in the recurrent connections of CA3 may primarily serve to reinforce and stabilize intrinsic dynamics, rather than creating spatial maps de novo. The particular neural activity associated with a given experience would then selectively reinforce the relevant intrinsic dynamics, while leaving the rest of the network dynamics unchanged.

Our model provides a general framework for understanding the origin of pre-configured hippocampal dynamics. Hebbian plasticity on independent, previously experienced place maps would produce effectively random clustered connectivity. The spontaneous dynamics of such networks would influence expression of place fields in future, novel environments. Together with intrinsic sequence generation, this could enable preplay and immediate replay generated by the preexisting recurrent connections.

Future modeling work should explore how experience-dependent plasticity may leverage and reinforce the dynamics initially expressed through preexisting clustered recurrent connections to produce higher-quality place fields and decoded trajectories during replay (Shin et al., 2019; Farooq et al., 2019). Plasticity may strengthen connectivity along frequently reactivated spatiotemporal patterns. Clarifying interactions between intrinsic dynamics and experience-dependent plasticity will provide key insights into hippocampal neural activity.

Methods

To investigate what network properties could support preplay, we simulated recurrently connected networks of spiking neurons and analyzed their dynamics using standard hippocampal place cell analyses.

Neuron model

We simulate networks of Leaky Integrate-and-Fire (LIF) neurons, which have leak conductance, gL, excitatory synaptic conductance, gE, inhibitory synaptic conductance, gI, spike-rate adaptation (SRA) conductance, gSRA, and external feed-forward input synaptic conductance, gext. The membrane potential, V, follows the dynamics

τm dV/dt = (EL − V) + [gE (EE − V) + gI (EI − V) + gSRA (ESRA − V) + gext (Eext − V)] / gL

where τm is the membrane time constant, EL is the leak reversal potential, EE is the excitatory synapse reversal potential, EI is the inhibitory synapse reversal potential, ESRA is the SRA reversal potential, and Eext is the external input reversal potential. When the membrane potential reaches the threshold Vth, a spike is emitted and the membrane potential is reset to Vreset.

The changes in SRA conductance and all synaptic conductances follow

τi dgi/dt = −gi

to produce exponential decay between spikes for any conductance i. A step increase in conductance occurs at the time of each spike by an amount corresponding to the connection strength for synapses or by δSRA for gSRA.

Network structure

We simulated networks of n = 500 neurons, of which 75% were excitatory. Excitatory neurons were randomly, independently assigned membership to each of nc clusters in the network. First, each neuron was randomly assigned membership to one of the clusters. Then, each cluster was assigned a number—nE (μc − 1)/nc rounded to the nearest integer—of additional randomly selected neurons such that each cluster had identical numbers of neurons, nE,clust. = nE (μc/nc), and mean cluster participation, μc, reached its goal value.
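The two-stage cluster assignment above can be sketched in Python/NumPy (the paper's code is MATLAB; the function name is ours, and filling each cluster up to the target size is our reading of the requirement that all clusters end with identical numbers of neurons):

```python
import numpy as np

def assign_clusters(n_e, n_clusters, mean_participation, rng=None):
    """Randomly assign excitatory neurons to clusters (illustrative sketch).
    Returns a boolean membership matrix of shape (n_e, n_clusters)."""
    rng = np.random.default_rng(rng)
    membership = np.zeros((n_e, n_clusters), dtype=bool)
    # Stage 1: every neuron joins exactly one randomly chosen cluster.
    membership[np.arange(n_e), rng.integers(n_clusters, size=n_e)] = True
    # Stage 2: each cluster recruits additional random neurons up to the
    # target size n_e * mu_c / n_c, so mean participation reaches its goal.
    target = round(n_e * mean_participation / n_clusters)
    for c in range(n_clusters):
        deficit = target - membership[:, c].sum()
        if deficit > 0:
            candidates = np.flatnonzero(~membership[:, c])
            membership[rng.choice(candidates, deficit, replace=False), c] = True
    return membership
```

With the fiducial values (nE = 375, nc = 15, μc = 1.25) the target cluster size is round(375 × 1.25/15) = 31 neurons.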

E-to-E recurrent connections were randomly assigned on a cluster-wise basis, where only neurons that shared membership in a cluster could be connected. The within-cluster connection probability was configured such that the network exhibited a desired global E-to-E connection probability pc. Given the total number of possible connections between excitatory neurons is Ctot= nE (nE − 1) and the total number of possible connections between excitatory neurons within all clusters is Cclust = nE,clust (nE,clust − 1) nc, we calculated the within-cluster connection probability as pc (Ctot/Cclust). That is, given the absence of connections between clusters (clusters were coupled by the overlap of cells) the within-cluster connection probability was greater than pc so as to generate the desired total number of connections equal to pcCtot.
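The within-cluster probability follows directly from the counting argument above (a Python sketch of the paper's MATLAB calculation; note the formula ignores the possibility that a pair of neurons shares more than one cluster):

```python
def within_cluster_prob(n_e, n_clusters, mean_participation, p_c):
    """Within-cluster connection probability that yields a desired global
    E-to-E connection probability p_c, per the formulas in the text."""
    n_e_clust = n_e * mean_participation / n_clusters  # neurons per cluster
    c_tot = n_e * (n_e - 1)                            # all possible E-to-E pairs
    c_clust = n_e_clust * (n_e_clust - 1) * n_clusters # pairs within clusters
    return p_c * c_tot / c_clust
```

For the fiducial parameters (nE = 375, nc = 15, μc = 1.25, pc = 0.08) this gives a within-cluster connection probability near 0.79.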

All E-to-I and I-to-E connections were independent of cluster membership and existed with a fixed connection probability. There were no I-to-I connections. pc, nc, and μc were varied for some simulations. Except where specified otherwise, all parameters took the fiducial values shown in the table below.

The network visualization in Figure 1c was plotted based on the first 2 dimensions of a t-distributed stochastic neighbor embedding of the connectivity between excitatory cells using the MATLAB function tsne. The feature vector for each excitatory cell was the binary vector indicating the presence of both input and output connections.

Network inputs

All excitatory neurons in the network received three different feed-forward inputs (Figure 1b). Two inputs were spatially modulated, with rates that peaked at either end of the track and linearly varied across the track to reach zero at the opposite end. One input was a context cue that was position independent. All excitatory cells received unique Poisson spike trains from each of the three inputs at their position-dependent rates. Inhibitory cells received only the context input.

The connection strength of each feed-forward input to each neuron was determined by two factors: an independent random factor and a cluster-specific factor.

First, strengths were randomly drawn from a log-normal distribution e^(μ+σ𝒩), where 𝒩 is a zero-mean, unit-variance normal distribution, with μ and σ set to produce a mean strength Win and standard deviation σin for the location cues, and with σin replaced by σcontext for the context cue. Each environment and the sleep session had unique context cue input weights. For model simplicity, the mean input strength Win for all inputs was kept the same for both E and I cells in both the awake and sleep conditions, but the strength of the resulting context input was then scaled by a factor fx for each of the 4 cases to accommodate the presence, or absence, of the additional current input from the location cues. These scaling factors were set at a level that generated appropriate levels of population activity. During simulation of linear track traversal, the context cue input to excitatory cells was scaled down by fE-awake to compensate for the added excitatory drive of the location cue inputs, while the context cue input to I cells was not changed (fI-awake = 1). During sleep simulation, the context cue input to E cells was not scaled (fE-sleep = 1) but the context cue input to I cells was scaled down by fI-sleep.

Second, to incorporate cluster-dependent spatial information, a small (≤ 4%) location cue bias was added to the randomly drawn feed-forward weights based on each neuron’s cluster membership. For each environment, the clusters were randomly shuffled and assigned a normalized rank bias value, such that the first cluster had a bias of -1 (corresponding to a rightward cue preference) and the last cluster had a bias of +1 (leftward cue preference). A neuron’s individual bias was calculated as the mean bias of all clusters it belonged to, multiplied by the scaling factor σbias. The left cue weight for each neuron was then scaled by 1 plus its bias, and the right cue weight was scaled by 1 minus its bias. In this way, the feed-forward input tuning was biased based on the mean rank of a neuron’s cluster affiliations for each environment. The addition of this bias produced correlations in cells’ spatial tunings based on cluster membership, but, importantly, this did not affect any aspect of the sleep simulations of preplay, nor did it lead to high correlations of place-field maps between environments (Figure 7b).
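The cluster-rank bias can be sketched as follows (Python in place of the paper's MATLAB; the function signature and variable names are our own assumptions):

```python
import numpy as np

def biased_cue_weights(membership, base_left, base_right, sigma_bias, rng=None):
    """Apply the cluster-rank location-cue bias described in the text.
    membership: (n_cells, n_clusters) boolean matrix; base_left/base_right:
    the randomly drawn feed-forward weights for the two location cues."""
    rng = np.random.default_rng(rng)
    n_clusters = membership.shape[1]
    # Shuffle clusters and assign normalized rank biases spanning [-1, 1].
    ranks = rng.permutation(np.linspace(-1, 1, n_clusters))
    # Each neuron's bias is the mean bias of its clusters, times sigma_bias.
    cell_bias = sigma_bias * (membership.astype(float) @ ranks) \
        / membership.sum(axis=1)
    # Left cue scaled by (1 + bias), right cue scaled by (1 - bias).
    return base_left * (1 + cell_bias), base_right * (1 - cell_bias)
```

Because the two cues are scaled oppositely, the summed scaling of left and right weights is unchanged for every cell.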

Simulation

For a given parameter set, we generated 10 random networks. We simulated each network for one sleep session of 120 s and for five 2-s long traversals of each of the two linear trajectories on each track. For analysis comparing place-field reliability, we simulated 10 traversals of each trajectory.

Place field analysis

Place-field rate maps

We followed the methods of Shin et al., 2019 to generate place fields from the spike trains. We calculated for each excitatory cell its trial-averaged occupancy-discounted firing rate in each 2 cm spatial bin of the 1 m long linear track. Note that the occupancy-discounting term is uniform across bins, so it has no impact in our model, because we simulated uniform movement speed. We then smoothed this with a Gaussian kernel with a 4 cm standard deviation. For statistics quantifying place-field properties and for Bayesian decoding, we considered only excitatory cells with place-field peaks exceeding 3 Hz as in Shin et al., 2019.
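The rate-map construction can be sketched as follows (Python/NumPy in place of the paper's MATLAB; the helper assumes spike positions in cm and per-bin occupancy in seconds, both pooled over trials):

```python
import numpy as np

def place_field(spike_positions, occupancy_s, bin_cm=2, track_cm=100, smooth_cm=4):
    """Occupancy-discounted rate map with Gaussian smoothing, per the text.
    spike_positions: positions (cm) at which the cell spiked, pooled over
    trials. occupancy_s: time spent (s) in each spatial bin, summed over
    trials."""
    n_bins = track_cm // bin_cm
    counts, _ = np.histogram(spike_positions, bins=n_bins, range=(0, track_cm))
    rate = counts / occupancy_s          # occupancy-discounted firing rate (Hz)
    # Gaussian smoothing kernel with a 4 cm standard deviation (2 bins).
    sigma = smooth_cm / bin_cm
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(rate, kernel, mode="same")
```

With uniform occupancy (as in the model's constant-speed runs) the discounting is indeed a uniform rescaling, as noted above.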

Place-field specificity

Place-field specificity was defined as 1 minus the fraction of the spatial bins in which the place field’s rate exceeded 25% of its maximum rate (Shin et al., 2019).

Place-field spatial information

The spatial information of each cell's place field was calculated as

SI = Σi pi (ri/r̄) log2(ri/r̄)

where pi is the probability of being in spatial bin i, ri is the place field's rate in spatial bin i, and r̄ is the mean rate of the place field (Sheintuch et al., 2023). Given the division of the track into 50 spatial bins, spatial information could vary between 0, for equal firing in all bins, and log2(50) ≅ 5.6, for firing in only a single bin. Spatial information of 1 is equivalent, for example, to equal firing in exactly one half of the bins and no firing elsewhere.
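This spatial information measure is a short computation (a Python sketch; uniform occupancy is assumed by default, matching the model's uniform movement speed):

```python
import numpy as np

def spatial_information(rates, occupancy_p=None):
    """Spatial information (bits/spike) of a place field, as described in
    the text. rates: firing rate in each spatial bin; occupancy_p:
    occupancy probability per bin (uniform if omitted)."""
    rates = np.asarray(rates, dtype=float)
    p = np.full(rates.size, 1 / rates.size) if occupancy_p is None else occupancy_p
    mean_rate = np.sum(p * rates)
    ratio = rates / mean_rate
    # Bins with zero rate contribute zero (lim r->0 of r*log r is 0).
    terms = np.where(rates > 0,
                     p * ratio * np.log2(np.where(ratio > 0, ratio, 1)), 0)
    return terms.sum()
```

The limiting cases stated above hold: uniform firing gives 0 bits, firing in a single bin of 50 gives log2(50) ≅ 5.6 bits, and firing uniformly in half the bins gives 1 bit.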

Distribution of peaks

We used two measures to quantify the extent to which place-field peaks were uniformly distributed across the track. In our first measure, we calculated the Kullback-Leibler divergence of the distribution of peaks from a uniform distribution, as

DKL = Σi fi log2(fi/ui)

where fi is the fraction of cells with peak firing rates in the ith spatial bin and ui is 1/50, i.e., the fraction expected from a uniform distribution (Sheintuch et al., 2023). As with spatial information, DKL is bounded between zero, for a perfectly uniform distribution of peaks, and log2(50) ≅ 5.6, if all peaks were in a single bin. DKL of 1 is equivalent, for example, to all peaks being uniformly spread over one half of the bins in the track.
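This divergence can be computed from the list of per-cell peak bins (a Python sketch; empty bins contribute zero to the sum):

```python
import numpy as np

def peak_distribution_dkl(peak_bins, n_bins=50):
    """KL divergence (bits) of the distribution of place-field peak
    locations from a uniform distribution, as described in the text.
    peak_bins: integer peak bin index for each place cell."""
    f = np.bincount(peak_bins, minlength=n_bins) / len(peak_bins)
    u = 1 / n_bins  # uniform expectation per bin
    return np.sum(np.where(f > 0, f * np.log2(np.where(f > 0, f, 1) / u), 0))
```

As stated above, a perfectly uniform set of peaks gives 0, all peaks in one bin gives log2(50), and peaks spread uniformly over half the track gives 1.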

For our second measure, we calculated the fraction of place cells whose peak firing rate was in the central third of the track. Since inputs providing spatial information only peaked at the boundaries of the track, the central third was ubiquitously the most depleted of high firing rates.

Place-field map correlations

To compare the similarity of place fields across different trajectories, we calculated the correlation between the place-field rate maps of each pair of trajectories. For each spatial bin, we calculated the Pearson correlation coefficient between the vector of the population place-field rates of the two trajectories. We then averaged the correlation coefficients across all spatial bins to get the correlation between the two trajectories.
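This bin-wise population-vector correlation can be sketched as follows (Python; skipping bins with no variance, to keep the per-bin correlation defined, is our own assumption):

```python
import numpy as np

def map_correlation(map_a, map_b):
    """Population-vector correlation between two place-field maps, as
    described in the text: the Pearson correlation of the population rate
    vectors in each spatial bin, averaged over bins. Each map is an
    (n_cells, n_bins) array of place-field rates."""
    corrs = []
    for b in range(map_a.shape[1]):
        x, y = map_a[:, b], map_b[:, b]
        if x.std() > 0 and y.std() > 0:  # skip bins with no rate variance
            corrs.append(np.corrcoef(x, y)[0, 1])
    return np.mean(corrs)
```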

PBE detection

We detected candidate preplay events in the simulated data by identifying population-burst events (PBEs). During the simulated sleep period, we calculated the mean rate of the population of excitatory cells, which defines the population rate, smoothed with a Gaussian kernel (15 ms standard deviation). We then detected PBEs as periods of time when the population rate exceeded 1 standard deviation above the mean population rate for at least 30 ms. We also required the peak population rate to exceed 0.5 Hz (corresponding to 5-6 spikes per 30ms among excitatory cells) in order for the rate fluctuation to qualify as a PBE. We then combined PBEs into a single event if their start and end times were separated by less than 10 ms.
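The detection criteria can be sketched as follows (a Python sketch of the pipeline described above; edge-case handling is our own assumption, and the peak criterion is applied to the smoothed rate):

```python
import numpy as np

def detect_pbes(pop_rate, dt_ms=1.0, smooth_ms=15, min_dur_ms=30,
                min_peak_hz=0.5, merge_gap_ms=10):
    """Detect population-burst events from the mean excitatory population
    rate, following the criteria in the text. pop_rate: rate (Hz) sampled
    every dt_ms. Returns (start, end) sample-index pairs."""
    # Smooth with a Gaussian kernel (15 ms standard deviation).
    sigma = smooth_ms / dt_ms
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    r = np.convolve(pop_rate, k, mode="same")
    # Candidate periods: rate above mean + 1 standard deviation.
    above = r > r.mean() + r.std()
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    # Merge events separated by < 10 ms, then apply duration/peak criteria.
    events = []
    for s, e in zip(starts, ends):
        if events and s - events[-1][1] < merge_gap_ms / dt_ms:
            events[-1][1] = e
        else:
            events.append([s, e])
    return [(s, e) for s, e in events
            if (e - s) * dt_ms >= min_dur_ms and r[s:e].max() >= min_peak_hz]
```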

Sharp-wave ripple detection

Because of the reduced number of recorded cells relative to the simulated data, we detected candidate events in the Shin et al., 2019 data with a method that incorporated the ripple band oscillation power in the local field potential (LFP) in addition to the population spiking activity. We first calculated the smoothed firing rate for each excitatory neuron by convolving its spikes with a Gaussian kernel (100 ms standard deviation) and capping at 1 to prevent bursting dominance. We then computed the z-scored population firing rate from the capped, smoothed single-neuron rates. Additionally, we calculated the z-scored, ripple-filtered envelope of the tetrode-averaged LFP. We then summed these two z-scores and detected peaks that exceeded 6 for at least 10 ms and exceeded the neighboring regions by at least 6 (MinPeakHeight, MinPeakWidth, and MinPeakProminence of the MATLAB function findpeaks, respectively). Candidate events were defined as periods around detected peaks, spanning from when the z-score sum first dipped below 0 for at least 5 ms before the peak to after the peak when it again dipped below 0 for at least 5 ms. We additionally required that the animal be immobile during the event.

Bayesian decoding

We performed Bayesian decoding of candidate preplay events following the methods of Shin et al., 2019. We performed decoding on all candidate events that had at least 5 active cells and lasted at least 50 ms. Spikes in the event were binned into 10 ms time bins. We decoded using the place fields for each trajectory independently. The description provided below is for the decoding using the place fields of one particular trajectory.

For each time bin of each event, we calculated the location on the track represented by the neural spikes based on the place fields of the active cells using a memoryless Bayesian decoder

P(x|s) = P(s|x) P(x) / P(s)

where P(x|s) is the probability of the animal being in spatial bin x given the set of spikes s that occurred in the time bin, P(s|x) is the probability of the spikes s given the animal is in spatial bin x (as given by the place fields), P(x) is the prior probability of the animal being in spatial bin x, and P(s) is the probability of the spikes s.

We assumed a uniform prior probability of position, P(x). We assumed that the N cells firing during the event acted as independent Poisson processes in order to calculate

P(s|x) = ∏i=1..N [(τ ri(x))^si / si!] e^(−τ ri(x))

where τ is the time bin window duration (10 ms), ri(x) is the place-field rate of cell i in spatial bin x, and si is the number of spikes from cell i in the time bin.

This allows us to calculate the posterior probability of position for each time bin as

P(x|s) = C (∏i=1..N ri(x)^si) e^(−τ Σi ri(x))

where C is a normalization constant, which accounts for the position-independent terms, including P(s).
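Combining these pieces, a memoryless decoder can be sketched in a few lines (Python/NumPy; computing in log space for numerical stability is our own implementation choice):

```python
import numpy as np

def decode_event(spikes, place_fields, tau=0.01):
    """Memoryless Bayesian decoder for one candidate event, per the text.
    spikes: (n_cells, n_time_bins) spike counts in 10 ms bins;
    place_fields: (n_cells, n_space_bins) rates ri(x) in Hz.
    Returns an (n_space_bins, n_time_bins) posterior; columns sum to 1."""
    r = np.maximum(place_fields, 1e-12)  # avoid log(0) for silent bins
    # log P(s|x) = sum_i [ s_i log(tau r_i(x)) - tau r_i(x) ] + const.
    log_like = spikes.T @ np.log(tau * r) - tau * r.sum(axis=0)  # (T, X)
    log_like -= log_like.max(axis=1, keepdims=True)  # numerical stability
    post = np.exp(log_like)
    post /= post.sum(axis=1, keepdims=True)  # uniform prior; normalize
    return post.T
```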

Bayesian decoding statistical analyses

We analyzed the significance of preplay using the methods of Farooq et al., 2019 (see also Silva et al., 2015). We computed two measures of the sequence quality of each decoded event: the event’s absolute weighted correlation and its jump distance. The absolute weighted correlation is the absolute weighted Pearson’s correlation of decoded position across the event’s time bins. For each decoded event, we calculate the weighted correlation between space and time with MATLAB’s fitlm function using the decoded probability in each space-time bin (10 ms by 2 cm) as the weight for the corresponding location in the correlation. The absolute value of the weighted correlation is used in order to account for both forward and reverse preplay. The jump distance is the maximum of the distance between the positions of peak probability for any two adjacent 10-ms time bins in the event, quantified as fraction of the track length.
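The weighted space-time correlation can be computed directly from its definition (a Python sketch; the paper obtained it via MATLAB's fitlm, and taking the absolute value is left to the caller):

```python
import numpy as np

def weighted_correlation(posterior):
    """Weighted Pearson correlation between decoded position and time,
    using the posterior probability in each space-time bin as the weight,
    as described in the text. posterior: (n_space, n_time) array."""
    n_space, n_time = posterior.shape
    x, y = np.meshgrid(np.arange(n_time), np.arange(n_space))  # time, space
    w = posterior / posterior.sum()
    mx, my = (w * x).sum(), (w * y).sum()          # weighted means
    cov = (w * (x - mx) * (y - my)).sum()          # weighted covariance
    return cov / np.sqrt((w * (x - mx)**2).sum() * (w * (y - my)**2).sum())
```

A posterior concentrated on a monotonically advancing diagonal yields a correlation near ±1, with the sign distinguishing forward from reverse trajectories.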

For each event, we generated 100 shuffled events by randomly permuting the order of the 10-ms time bins. We then calculated the weighted correlation and jump distance for each shuffled event in the same manner as for the actual events. For each simulated parameter set, we combined all events from the 10 simulated networks.

Following the methods of Farooq et al., 2019, we calculated the statistical significance of the population of preplay events using two different methods. First, we used the Kolmogorov-Smirnov (KS) test to compare the distributions of absolute weighted correlations obtained from the actual events and the shuffled events (Figure 4a, c).

Second, we used a bootstrap test to compare the fraction of high-quality events—defined as having both high absolute weighted correlations and low maximum jump distance— relative to shuffles (Figure 4b,d). To perform the bootstrap test, we created a grid of thresholds for minimum absolute weighted correlation and maximum jump distance, and for each combination of thresholds we calculated the fraction of actual events that exceeded the minimum absolute weighted correlation threshold and did not exceed the maximum jump distance threshold. Then, we generated 100 data sets of shuffled events by randomly permuting the order of the 10-ms time bins for each actual event and calculated the fraction of events meeting the same pairs of thresholds for each shuffled data set. The p-value of the fraction of high-quality events was then calculated as the fraction of shuffled data sets with a higher fraction of high-quality events.
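The bootstrap p-value at a single threshold pair can be sketched as follows (Python; the array-shape conventions are our own):

```python
import numpy as np

def bootstrap_pvalue(actual_wc, actual_jump, shuffle_wc, shuffle_jump,
                     min_wc, max_jump):
    """P-value for the fraction of high-quality events at one threshold
    pair, as described in the text. actual_*: (n_events,) arrays over
    events; shuffle_*: (n_shuffle_sets, n_events) arrays."""
    frac_actual = np.mean((actual_wc >= min_wc) & (actual_jump <= max_jump))
    frac_shuffle = np.mean((shuffle_wc >= min_wc) & (shuffle_jump <= max_jump),
                           axis=1)  # one high-quality fraction per data set
    # Fraction of shuffled data sets beating the actual fraction.
    return np.mean(frac_shuffle > frac_actual)
```

Repeating this over the grid of threshold pairs yields the p-value matrices shown in Figure 4b,d.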

To test the significance of each event’s absolute weighted correlation individually, we calculated the event’s p-value as the fraction of the event’s own shuffles that had a higher absolute weighted correlation than the un-shuffled event (Figure 4f, bottom left).

The spatial entropy H of a decoded event was calculated as the mean over its time bins of the entropy of the decoded position probability in each time bin, using the equation

H = −Σi pi log2(pi)

for each time bin, where pi is the decoded position probability for spatial bin i.

Small-world index

The small-world index (SWI) was calculated following the method of Neal, 2015 (see also Neal, 2017). It was defined as

SWI = ((L − Ll)/(Lr − Ll)) × ((C − Cr)/(Cl − Cr))

where L is the mean path distance and C is the clustering coefficient of the network. We calculate L as the mean over all ordered pairs of excitatory cells of the shortest directed path length from the first to the second cell. We calculate C as the ratio of the number of all triplets of excitatory cells that are connected in either direction over the number of all triplets that could form, following the methods of Fagiolo, 2007 for directed graphs. Ll and Cl are the expected values for a one-dimensional ring lattice network with the same size and connection probability (in which connections are local, such that there are no connections between cells with a greater separation on the ring than that of any pairs without a connection). Lr and Cr are the expected values for a random network of the same size and connection probability. A network with a high SWI is therefore a network with both a high clustering coefficient, similar to a ring lattice network, and a small mean path length, similar to a random network.

For directed graphs of size n, average degree k, and global connection probability p, we used the approximations

Cr = p (Fagiolo, 2007),

Lr = (ln n − γ)/(ln k) + 1/2 (Fronczak et al., 2004),

Cl = 3(k − 2)/(4(k − 1)) (Neal, 2015),

Ll = n/(2k) + 1/2 (Neal, 2015; Fronczak et al., 2004),

where γ is the Euler-Mascheroni constant.

Experimental data

Electrophysiological data was reanalyzed from the hippocampal CA1 recordings first published in Shin et al., 2019. All place-field data (Figure 3a) came from the six rats’ first experience on the W-track spatial alternation task. All preplay data (Figure 4a,b) came from the six rats’ first sleep-box session, which lasted 20-30 minutes and occurred immediately before their first experience on the W-track.

Code

Simulations and analysis were performed in MATLAB with custom code. Code available at https://github.com/primon23/Preplay_paper.

Supplemental figures

Example preplay events from the Shin et al., 2019 data

Example preplay events. Same as Figure 2f but for events from the hippocampal data from Shin et al., 2019. The height of each plot spans the length of the trajectory used for decoding, divided into 2 cm spatial bins. The width of each plot spans the duration of the detected event, divided into 10 ms time bins. Probability is shown in color.

Significant preplay can typically be identified with as few as 50 cells

(a-c) Results from performing the same Bayesian decoding on the same simulated population burst events (PBEs) as in Figure 4c, but using only random subsets of the excitatory cells for the decoding analysis. Each circle is the result of an analysis performed on one random subset of the cells. 25 random subsets were analyzed for each analyzed cell count. The subset sizes are logarithmically spaced. Black lines show the median value. The variability at N=375 is due to the variation in the randomness of the time-bin shuffles. (a) Number of events meeting the inclusion criterion for decoding analysis. (b) P-value of the KS-test comparing actual vs shuffled event absolute weighted correlations. A majority of the random subsets of 50 cells (17 out of 25) produce preplay p-values below 0.05. (c) Shift in the median absolute weighted correlation of actual events relative to shuffled events.

Preplay statistics by trajectory for Shin et al., 2019 data.

(a) Same as Figure 4a but separated by results from decoding by each of the 4 trajectories of the W-track individually (trajectory 1, center arm to right arm; trajectory 2, right arm to center arm; trajectory 3, center arm to left arm; trajectory 4, left arm to center arm). KS-test for each trajectory: trajectory 1, p=0.0030; trajectory 2, p=0.0028; trajectory 3, p=0.0027; trajectory 4, p=5.461×10⁻⁵. ** p<0.01, *** p<0.001. (b) Same as Figure 4b but separated by results from decoding by each of the 4 trajectories individually.