package deep

v1.2.10
Published: Mar 7, 2024 License: BSD-3-Clause Imports: 16 Imported by: 20

README

DeepLeabra

Package deep provides the DeepLeabra variant of Leabra, which performs predictive learning by attempting to predict the activation states over the Pulvinar nucleus of the thalamus (in posterior sensory cortex), which are strongly driven phasically every 100 msec by deep layer 5 intrinsic bursting (5IB) neurons that have strong focal (essentially 1-to-1) connections onto the Pulvinar Thalamic Relay Cell (TRC) neurons. The predictions are generated by layer 6 corticothalamic (CT) neurons, which provide numerous weaker projections to these same TRC neurons. See O'Reilly et al. (2020) for the model, and Sherman & Guillery (2006) for details on circuitry.

Computationally, it is important for the CT neurons to reflect the prior burst activation within their home cortical microcolumn, instead of the current superficial layer activation, so that the system is forced to make a genuine prediction instead of just copying the current state. This is achieved using a CTCtxt projection, which operates much like a simple recurrent network (SRN) context layer (e.g., Elman, 1990).
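
As a rough sketch of this context computation (illustrative only, with hypothetical names -- the actual mechanism is the CTCtxtPrjn documented below), the context conductance for each CT neuron is loaded from the prior burst quarter's Burst values, rather than the current activations:

    // updateCtxt loads CT context from the prior quarter's Burst values,
    // so CT reflects the previous state rather than copying the current one.
    // CtxtGe is the full value, not a delta (see CTCtxtPrjn below).
    func updateCtxt(ctxtGe, burst []float32, wts [][]float32) {
        for ri := range ctxtGe {
            var ge float32
            for si, b := range burst {
                ge += wts[ri][si] * b // driven by Burst, not current Act
            }
            ctxtGe[ri] = ge
        }
    }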

This same corticothalamic circuitry is also important for broader inhibitory competition among cortical areas that cannot practically interact directly in cortex, given the long physical distances. The thalamic reticular nucleus (TRN) integrates excitatory inputs from the CT and TRC neurons, and projects pooled inhibition across multiple spatial scales back to the TRC neurons. These TRC neurons then project back primarily into layer 4 (and more weakly into other layers) and thus convey the attentionally modulated predictive activation back into cortex.

Computationally, it makes sense that attention and prediction are linked: you only predict the subset of information that you're attending to (otherwise it is too overwhelming to predict everything), and prediction helps to focus the attentional spotlight in anticipation of what will happen next, and according to the regularities that predictive learning has discovered (e.g., coherently moving objects).

However, reconciling the attentional and predictive roles of this circuitry introduces some complexity. The attentional function depends on a closed-loop connectivity pattern organized around pools of neurons (e.g., hypercolumns), such that CT -> TRN -> TRC -> Cortex is all properly aligned. This is generally how the pulvinar is organized topographically (Shipp, 2003). However, from a predictive-learning perspective it is simpler to coordinate the TRC according to its driver 5IB projections so you can directly measure how well the prediction matches these driver inputs. Furthermore, each cortical pool is involved in predicting across multiple different drivers across multiple layers in the hierarchy (e.g., IT cortex predicts V1, V2, V4 drivers), so the closed-loop TRC is driven by diverse inputs that need to be coordinated, whereas the driver-based organization can handle this more simply with multiple projections. This is how we organized things computationally in previous implementations.

In the current implementation, we use the closed-loop connectivity, with a Drivers list on TRCLayer that manages the aggregation of driver inputs from multiple different layers and also handles the conversion between the group-level topologies of those layers (previously handled by projection-pattern logic).

 V1Super -----> V2 --Ctxt--> CT
   |            ^             |\
 Burst          |   (pool     | v
   |            |    loop)    | TRN
   v            |             | /
 Pred <------> TRC <-----------o (inhib)
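
For example, the Drivers aggregation can be sketched as follows (hypothetical layer names; uses the AddTRCLayer4D and Drivers.Add APIs documented below, and assumes an existing *deep.Network named nt):

    // an IT-level TRC (Pulvinar) aggregating multiple lower driver layers,
    // per the closed-loop organization diagrammed above.
    itp := deep.AddTRCLayer4D(&nt.Network, "ITP", 4, 4, 5, 5)
    itp.Drivers.Add("V1", "V2", "V4") // drivers aggregated by the Drivers list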

This package has 3 primary specialized Layer types:

  • SuperLayer: implements the superficial layer neurons, which function just like standard leabra.Layer neurons, while also directly computing the Burst activation signal that reflects the deep layer 5IB bursting activation, via thresholding of the superficial layer activations (Bursting is thought to have a higher threshold).

  • CTLayer: implements the layer 6 regular spiking CT corticothalamic neurons that project into the thalamus. They receive the Burst activation via a CTCtxtPrjn projection type, typically once every 100 msec, and integrate that in the CtxtGe value, which is added to other excitatory conductance inputs to drive the overall activation (Act) of these neurons. Due to the bursting nature of the Burst inputs, this causes these CT layer neurons to reflect what the superficial layers encoded on the previous timestep -- thus they represent a temporally-delayed context state.

CTLayer can send Context via self projections to reflect the extensive deep-to-deep lateral connectivity that provides more extensive temporal context information.

  • TRCLayer: implements the TRC (Pulvinar) neurons, onto which the prediction generated by CTLayer projections is projected in the minus phase. This is computed via standard Act-driven projections that integrate into standard Ge excitatory input in TRC neurons. The 5IB Burst-driven plus-phase "outcome" activation state is driven by direct access to the corresponding driver SuperLayer (not via standard projection mechanisms). Wiring diagram:

 SuperLayer --Burst--> TRCLayer
   |                      ^
 CTCtxt          /- Back -/
   |            /
   v           /
 CTLayer -----/  (typically only for higher->lower)

Timing

The alpha-cycle quarter(s) in which Burst is updated and broadcast are set in BurstQtr (defaults to Q4; can also be, e.g., Q2 and Q4 for beta-frequency updating). During these quarter(s), the Burst value is computed in SuperLayer and is continuously accessed by TRCLayer neurons to drive plus-phase outcome states.

At the end of the burst quarter(s), in the QuarterFinal method, CTCtxt projections convey the Burst signal from Super to CTLayer neurons, where it is integrated into the Ctxt value representing the temporally-delayed context information.
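
A sketch of where these calls fall in a standard alpha-cycle loop (assuming the usual leabra Time and Cycle APIs; input clamping and learning are omitted):

    // runAlpha runs one alpha cycle (4 quarters x 25 cycles = 100 msec);
    // with the default BurstQtr = Q4, the final QuarterFinal conveys
    // Burst from Super into CT context via CTCtxt projections.
    func runAlpha(nt *deep.Network) {
        ltime := leabra.NewTime()
        for qtr := 0; qtr < 4; qtr++ {
            for cyc := 0; cyc < ltime.CycPerQtr; cyc++ {
                nt.Cycle(ltime)
                ltime.CycleInc()
            }
            nt.QuarterFinal(ltime) // CTCtxt conveys Burst -> Ctxt here
            ltime.QuarterInc()
        }
    }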

TRN Attention and Learning

The basic anatomical facts of the TRN strongly constrain its role in attentional modulation. With the exception of inhibitory projections from the GPi / SNr (BG output nuclei), it exclusively receives excitatory inputs from CT projections, plus a weaker excitatory feedback projection from the very TRC neurons to which it in turn sends GABA inhibition. Thus, its main function appears to be providing pooled feedback inhibition to the TRC, with various levels of pooling on the input side and of diffusion on the output side. Computationally, this pooling seems ideally situated to enable inhibitory competition to operate across multiple different scales.

Given the pool-level organization of the CT -> TRC -> Cortex loops, the pool should be the finest grain of this competition. One contribution of the TRN is thus supporting inhibition across pools within a layer -- but this is already implemented by the layer-level inhibition in standard Leabra. Critically, if we assume that inhibition is generally hierarchically organized, the broader level of inhibition would be between layers. Thus, the TRN implementation supports just this broadest level of inhibition, providing a visual representation of the layers and their respective inhibition levels.

In addition, the TRC layer itself supports gaussian topographic inhibition among pools, representing the finer-grained inhibition that would be provided by the TRN.

Perhaps the most important contribution that the TRC / TRN can provide is a learning modulation at the pool level, as a function of inhibition.

Compounding: Getting the Good without too much Lock-In

It is relatively easy to make something that locks in a given attentional pattern, but a problem arises when you then need to change things in response to new inputs -- often the network suffers from too much attentional lock-in...

Reynolds & Heeger (2009)

Folded Feedback (Grossberg, 1999)

Grossberg (1999) emphasized that it can be beneficial for attention to modulate the inputs to a given area, so it gets "folded" into the input stream. Another way of thinking about this is that it is more effective to block a river further upstream, before further "compounding" effects might set in, rather than waiting until everything has piled in and you have to push against a torrent. This is achieved by modulating the layer 4 inputs to an area, which happens by modulating forward projections.

Extensions

See pbwm for info about the Prefrontal-cortex Basal-ganglia Working Memory (PBWM) model, which builds on this deep framework to support gated working memory.

References

  • Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211.

  • O’Reilly, R. C., Russin, J. L., Zolfaghar, M., & Rohrlich, J. (2020). Deep Predictive Learning in Neocortex and Pulvinar. ArXiv:2006.14800 [q-Bio]. http://arxiv.org/abs/2006.14800

  • Reynolds, J. H., & Heeger, D. J. (2009). The normalization model of attention. Neuron, 61(2), 168–185.

  • Sherman, S. M., & Guillery, R. W. (2006). Exploring the Thalamus and Its Role in Cortical Function. MIT Press. http://www.scholarpedia.org/article/Thalamus

  • Shipp, S. (2003). The functional logic of cortico-pulvinar connections. Philosophical Transactions of the Royal Society of London B, 358(1438), 1605–1624. http://www.ncbi.nlm.nih.gov/pubmed/14561322

Documentation

Overview

Package deep provides the DeepLeabra variant of Leabra, which performs predictive learning by attempting to predict the activation states over the Pulvinar nucleus of the thalamus (in posterior sensory cortex), which are driven phasically every 100 msec by deep layer 5 intrinsic bursting (5IB) neurons that have strong focal (essentially 1-to-1) connections onto the Pulvinar Thalamic Relay Cell (TRC) neurons.

This package has 3 specialized Layer types:

  • SuperLayer: implements the superficial layer neurons, which function just like standard leabra.Layer neurons, while also directly computing the Burst activation signal that reflects the deep layer 5IB bursting activation, via thresholding of the superficial layer activations (Bursting is thought to have a higher threshold).

  • CTLayer: implements the layer 6 regular spiking CT corticothalamic neurons that project into the thalamus. They receive the Burst activation via a CTCtxtPrjn projection type, typically once every 100 msec, and integrate that in the CtxtGe value, which is added to other excitatory conductance inputs to drive the overall activation (Act) of these neurons. Due to the bursting nature of the Burst inputs, this causes these CT layer neurons to reflect what the superficial layers encoded on the *previous* timestep -- thus they represent a temporally-delayed context state.

    CTLayer can send Context via self projections to reflect the extensive deep-to-deep lateral connectivity that provides more extensive temporal context information.

  • TRCLayer: implements the TRC (Pulvinar) neurons, onto which the prediction generated by CTLayer projections is projected in the minus phase. This is computed via standard Act-driven projections that integrate into standard Ge excitatory input in TRC neurons. The 5IB Burst-driven plus-phase "outcome" activation state is driven by direct access to the corresponding driver SuperLayer (not via standard projection mechanisms).

Wiring diagram:

 SuperLayer --Burst--> TRCLayer
   |                      ^
CTCtxt          /- Back -/
  |            /
  v           /
CTLayer -----/  (typically only for higher->lower)

Timing:

The alpha-cycle quarter(s) in which Burst is updated and broadcast are set in BurstQtr (defaults to Q4; can also be, e.g., Q2 and Q4 for beta-frequency updating). During these quarter(s), the Burst value is computed in SuperLayer and is continuously accessed by TRCLayer neurons to drive plus-phase outcome states.

At the *end* of the burst quarter(s), in the QuarterFinal method, CTCtxt projections convey the Burst signal from Super to CTLayer neurons, where it is integrated into the Ctxt value representing the temporally-delayed context information.

Index

Constants

View Source
const (
	// CT are layer 6 corticothalamic projecting neurons, which drive predictions
	// in TRC (Pulvinar) via standard projections.
	CT emer.LayerType = emer.LayerTypeN + iota

	// TRC are thalamic relay cell neurons in the Pulvinar / MD thalamus,
	// which alternately reflect predictions driven by Deep layer projections,
	// and actual outcomes driven by Burst activity from corresponding
	// Super layer neurons that provide strong driving inputs to TRC neurons.
	TRC
)
View Source
const (
	// CTCtxt are projections from Superficial layers to CT layers that
	// send Burst activations to drive updating of CtxtGe excitatory conductance,
	// at end of a DeepBurst quarter.  These projections also use a special learning
	// rule that takes into account the temporal delays in the activation states.
	// Can also add self context from CT for deeper temporal context.
	CTCtxt emer.PrjnType = emer.PrjnTypeN + iota
)

The DeepLeabra prjn types

Variables

View Source
var (
	// NeuronVars are for full list across all deep Layer types
	NeuronVars = []string{"Burst", "BurstPrv", "Attn", "CtxtGe"}

	// SuperNeuronVars are for SuperLayer directly
	SuperNeuronVars = []string{"Burst", "BurstPrv", "Attn"}

	SuperNeuronVarsMap map[string]int

	// NeuronVarsAll is full integrated list across inherited layers and NeuronVars
	NeuronVarsAll []string
)
View Source
var KiT_CTCtxtPrjn = kit.Types.AddType(&CTCtxtPrjn{}, PrjnProps)
View Source
var KiT_CTLayer = kit.Types.AddType(&CTLayer{}, LayerProps)
View Source
var KiT_LayerType = kit.Enums.AddEnumExt(emer.KiT_LayerType, LayerTypeN, kit.NotBitFlag, nil)
View Source
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
View Source
var KiT_PrjnType = kit.Enums.AddEnumExt(emer.KiT_PrjnType, PrjnTypeN, kit.NotBitFlag, nil)
View Source
var KiT_SuperLayer = kit.Types.AddType(&SuperLayer{}, LayerProps)
View Source
var KiT_TRCLayer = kit.Types.AddType(&TRCLayer{}, LayerProps)
View Source
var KiT_TRNLayer = kit.Types.AddType(&TRNLayer{}, leabra.LayerProps)
View Source
var KiT_TopoInhibLayer = kit.Types.AddType(&TopoInhibLayer{}, LayerProps)
View Source
var LayerProps = ki.Props{
	"EnumType:Typ": KiT_LayerType,
	"ToolBar": ki.PropSlice{
		{"Defaults", ki.Props{
			"icon": "reset",
			"desc": "return all parameters to their intial default values",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"LesionNeurons", ki.Props{
			"icon": "close",
			"desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)",
			"Args": ki.PropSlice{
				{"Proportion", ki.Props{
					"desc": "proportion (0 -- 1) of neurons to lesion",
				}},
			},
		}},
		{"UnLesionNeurons", ki.Props{
			"icon": "reset",
			"desc": "Un-Lesion (reset the Off flag) for all neurons in the layer",
		}},
	},
}

LayerProps are required to get the extended EnumType

View Source
var NetworkProps = leabra.NetworkProps
View Source
var PrjnProps = ki.Props{
	"EnumType:Typ": KiT_PrjnType,
}

Functions

func AddDeep2D added in v1.1.4

func AddDeep2D(nt *leabra.Network, name string, shapeY, shapeX int) (super, ct, trc emer.Layer)

AddDeep2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn OneToOne projection from Super to CT, and a TRC Pulvinar for Super (P suffix). TRC projects back to Super and CT layers, type = Back, class = FmPulv. CT is placed Behind Super, and Pulvinar behind CT. Drivers must be added to the TRC layer, and it must be sized appropriately for those drivers.
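
A minimal usage sketch (assumes an existing *deep.Network named nt and a 10x10 input layer named "V1"; the P-suffix naming follows the doc above):

    // creates V2 (SuperLayer), V2CT (CTLayer), and V2P (TRCLayer)
    super, ct, trc := deep.AddDeep2D(&nt.Network, "V2", 10, 10)
    trc.(*deep.TRCLayer).Drivers.Add("V1") // TRC must be sized to match its driver(s)
    _, _ = super, ct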

func AddDeep2DFakeCT added in v1.1.26

func AddDeep2DFakeCT(nt *leabra.Network, name string, shapeY, shapeX int) (super, ct, trc emer.Layer)

AddDeep2DFakeCT adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with a FAKE CTCtxtPrjn OneToOne projection from Super to CT, and a TRC Pulvinar for Super (P suffix). TRC projects back to Super and CT layers, type = Back, class = FmPulv. CT is placed Behind Super, and Pulvinar behind CT. Drivers must be added to the TRC layer, and it must be sized appropriately for those drivers. This does NOT make a CTCtxtPrjn -- instead it makes a regular leabra.Prjn -- for testing!

func AddDeep2DPy added in v1.1.15

func AddDeep2DPy(nt *leabra.Network, name string, shapeY, shapeX int) []emer.Layer

AddDeep2DPy adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn Full projection from Super to CT, and a TRC Pulvinar for Super (P suffix). TRC projects back to Super and CT layers, type = Back, class = FmPulv. CT is placed Behind Super, and Pulvinar behind CT. Drivers must be added to the TRC layer, and it must be sized appropriately for those drivers. Py is the Python version; returns layers as a slice.

func AddDeep4D added in v1.1.4

func AddDeep4D(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) (super, ct, trc emer.Layer)

AddDeep4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn OneToOne projection from Super to CT, and a TRC Pulvinar for Super (P suffix). TRC projects back to Super and CT layers, also PoolOneToOne, class = FmPulv. CT is placed Behind Super, and Pulvinar behind CT. Drivers must be added to the TRC layer, and it must be sized appropriately for those drivers.

func AddDeep4DFakeCT added in v1.1.26

func AddDeep4DFakeCT(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) (super, ct, trc emer.Layer)

AddDeep4DFakeCT adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with a FAKE CTCtxtPrjn OneToOne projection from Super to CT, and a TRC Pulvinar for Super (P suffix). TRC projects back to Super and CT layers, also PoolOneToOne, class = FmPulv. CT is placed Behind Super, and Pulvinar behind CT. Drivers must be added to the TRC layer, and it must be sized appropriately for those drivers. This does NOT make a CTCtxtPrjn -- instead it makes a regular leabra.Prjn -- for testing!

func AddDeep4DPy added in v1.1.15

func AddDeep4DPy(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) []emer.Layer

AddDeep4DPy adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn PoolOneToOne projection from Super to CT, and a TRC Pulvinar for Super (P suffix). TRC projects back to Super and CT layers, also PoolOneToOne, class = FmPulv. CT is placed Behind Super, and Pulvinar behind CT. Drivers must be added to the TRC layer, and it must be sized appropriately for those drivers. Py is the Python version; returns layers as a slice.

func AddDeepNoTRC2D added in v1.1.4

func AddDeepNoTRC2D(nt *leabra.Network, name string, shapeY, shapeX int) (super, ct emer.Layer)

AddDeepNoTRC2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn OneToOne projection from Super to CT, and NO TRC Pulvinar. CT is placed Behind Super.

func AddDeepNoTRC2DPy added in v1.1.15

func AddDeepNoTRC2DPy(nt *leabra.Network, name string, shapeY, shapeX int) []emer.Layer

AddDeepNoTRC2DPy adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn Full projection from Super to CT, and NO TRC Pulvinar. CT is placed Behind Super. Py is the Python version; returns layers as a slice.

func AddDeepNoTRC4D added in v1.1.4

func AddDeepNoTRC4D(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) (super, ct emer.Layer)

AddDeepNoTRC4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn OneToOne projection from Super to CT, and NO TRC Pulvinar. CT is placed Behind Super.

func AddDeepNoTRC4DPy added in v1.1.15

func AddDeepNoTRC4DPy(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) []emer.Layer

AddDeepNoTRC4DPy adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn PoolOneToOne projection from Super to CT, and NO TRC Pulvinar. CT is placed Behind Super. Py is the Python version; returns layers as a slice.

func ConnectCtxtToCT added in v1.1.2

func ConnectCtxtToCT(nt *leabra.Network, send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectCtxtToCT adds a CTCtxtPrjn from the given sending layer to a CT layer. Use ConnectSuperToCT for the main projection from the corresponding superficial layer.

func ConnectCtxtToCTFake added in v1.1.26

func ConnectCtxtToCTFake(nt *leabra.Network, send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectCtxtToCTFake adds a FAKE CTCtxtPrjn from the given sending layer to a CT layer. This does NOT make a CTCtxtPrjn -- instead it makes a regular leabra.Prjn -- for testing!

func ConnectSuperToCT added in v1.1.21

func ConnectSuperToCT(nt *leabra.Network, send, recv emer.Layer) emer.Prjn

ConnectSuperToCT adds a CTCtxtPrjn from the given sending Super layer to a CT layer. This automatically sets the FmSuper flag to engage proper defaults, uses a OneToOne prjn pattern, and sets the class to CTFmSuper.
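
For example (a sketch using super and ct layers from an Add helper above; the Full pattern for the self projection is a modeling choice reflecting the extensive deep-to-deep lateral connectivity noted earlier):

    // main context projection from Super, with FmSuper defaults:
    deep.ConnectSuperToCT(&nt.Network, super, ct)
    // optional CT self-context for deeper temporal history:
    deep.ConnectCtxtToCT(&nt.Network, ct, ct, prjn.NewFull())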

func ConnectSuperToCTFake added in v1.1.26

func ConnectSuperToCTFake(nt *leabra.Network, send, recv emer.Layer) emer.Prjn

ConnectSuperToCTFake adds a FAKE CTCtxtPrjn from the given sending Super layer to a CT layer. It uses a OneToOne prjn pattern and sets the class to CTFmSuper. This does NOT make a CTCtxtPrjn -- instead it makes a regular leabra.Prjn -- for testing!

func DriveAct added in v1.1.4

func DriveAct(dni int, dly *leabra.Layer, sly *SuperLayer, issuper bool) float32

func MaxPoolActAvg added in v1.1.4

func MaxPoolActAvg(ly *leabra.Layer) float32

MaxPoolActAvg returns the max Inhib.Act.Avg value across pools

func SuperNeuronVarIdxByName added in v1.1.4

func SuperNeuronVarIdxByName(varNm string) (int, error)

SuperNeuronVarIdxByName returns the index of the variable in the SuperNeuron, or error

func UnitsSize added in v1.1.4

func UnitsSize(ly *leabra.Layer) (x, y int)

UnitsSize returns the dimension of the units, either within a pool for 4D, or layer for 2D

Types

type BurstParams added in v1.0.0

type BurstParams struct {

	// Quarter(s) when bursting occurs -- typically Q4 but can also be Q2 and Q4 for beta-frequency updating.  Note: this is a bitflag and must be accessed using its Set / Has etc routines, 32 bit versions.
	BurstQtr leabra.Quarters `` /* 206-byte string literal not displayed */

	// [def: 0.1,0.2,0.5] [max: 1] Relative component of threshold on superficial activation value, below which it does not drive Burst (and above which, Burst = Act).  This is the distance between the average and maximum activation values within layer (e.g., 0 = average, 1 = max).  Overall effective threshold is MAX of relative and absolute thresholds.
	ThrRel float32 `` /* 353-byte string literal not displayed */

	// [def: 0.1,0.2,0.5] [min: 0] [max: 1] Absolute component of threshold on superficial activation value, below which it does not drive Burst (and above which, Burst = Act).  Overall effective threshold is MAX of relative and absolute thresholds.
	ThrAbs float32 `` /* 246-byte string literal not displayed */
}

BurstParams determine how the 5IB Burst activation is computed via thresholding of the standard Act activation values in SuperLayer.
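
A sketch of this thresholding per the field docs above (not the package source; avg and mx are the layer's average and maximum activations):

    // burstFmAct: Burst = Act above threshold, 0 below; the effective
    // threshold is the MAX of the relative and absolute components.
    func burstFmAct(act, avg, mx, thrRel, thrAbs float32) float32 {
        thr := avg + thrRel*(mx-avg) // relative: distance from avg toward max
        if thrAbs > thr {
            thr = thrAbs
        }
        if act < thr {
            return 0
        }
        return act
    }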

func (*BurstParams) Defaults added in v1.0.0

func (db *BurstParams) Defaults()

type CTCtxtPrjn added in v1.1.2

type CTCtxtPrjn struct {
	leabra.Prjn // access as .Prjn

	// if true, this is the projection from corresponding Superficial layer -- should be OneToOne prjn, with Learn.Learn = false, WtInit.Var = 0, Mean = 0.8 -- these defaults are set if FmSuper = true
	FmSuper bool `` /* 200-byte string literal not displayed */

	// local per-recv unit accumulator for Ctxt excitatory conductance from sending units -- not a delta -- the full value
	CtxtGeInc []float32 `desc:"local per-recv unit accumulator for Ctxt excitatory conductance from sending units -- not a delta -- the full value"`
}

CTCtxtPrjn is the "context" temporally-delayed projection into CTLayer (corticothalamic deep layer 6), where the CtxtGe excitatory input is integrated only at the end of the Burst Quarter. Set FmSuper for the main projection from the corresponding Super layer.

func (*CTCtxtPrjn) Build added in v1.1.2

func (pj *CTCtxtPrjn) Build() error

func (*CTCtxtPrjn) DWt added in v1.1.2

func (pj *CTCtxtPrjn) DWt()

DWt computes the weight change (learning) for Ctxt projections

func (*CTCtxtPrjn) Defaults added in v1.1.2

func (pj *CTCtxtPrjn) Defaults()

func (*CTCtxtPrjn) InitGInc added in v1.1.2

func (pj *CTCtxtPrjn) InitGInc()

func (*CTCtxtPrjn) PrjnTypeName added in v1.1.2

func (pj *CTCtxtPrjn) PrjnTypeName() string

func (*CTCtxtPrjn) RecvCtxtGeInc added in v1.1.2

func (pj *CTCtxtPrjn) RecvCtxtGeInc()

RecvCtxtGeInc increments the receiver's CtxtGe from that of all the projections

func (*CTCtxtPrjn) RecvGInc added in v1.1.2

func (pj *CTCtxtPrjn) RecvGInc()

RecvGInc: disabled for this type

func (*CTCtxtPrjn) SendCtxtGe added in v1.1.2

func (pj *CTCtxtPrjn) SendCtxtGe(si int, dburst float32)

SendCtxtGe sends the full Burst activation from sending neuron index si, to integrate CtxtGe excitatory conductance on receivers

func (*CTCtxtPrjn) SendGDelta added in v1.1.2

func (pj *CTCtxtPrjn) SendGDelta(si int, delta float32)

SendGDelta: disabled for this type

func (*CTCtxtPrjn) Type added in v1.1.2

func (pj *CTCtxtPrjn) Type() emer.PrjnType

func (*CTCtxtPrjn) UpdateParams added in v1.1.2

func (pj *CTCtxtPrjn) UpdateParams()

type CTLayer added in v1.1.2

type CTLayer struct {
	TopoInhibLayer // access as .TopoInhibLayer

	// Quarter(s) when bursting occurs -- typically Q4 but can also be Q2 and Q4 for beta-frequency updating.  Note: this is a bitflag and must be accessed using its Set / Has etc routines, 32 bit versions.
	BurstQtr leabra.Quarters `` /* 206-byte string literal not displayed */

	// slice of context (temporally delayed) excitatory conductances.
	CtxtGes []float32 `desc:"slice of context (temporally delayed) excitatory conductances."`
}

CTLayer implements the corticothalamic projecting layer 6 deep neurons that project to the TRC pulvinar neurons, to generate the predictions. They receive phasic input representing 5IB bursting via CTCtxtPrjn inputs from SuperLayer and also from self projections.

func AddCTLayer2D added in v1.1.2

func AddCTLayer2D(nt *leabra.Network, name string, nNeurY, nNeurX int) *CTLayer

AddCTLayer2D adds a CTLayer of given size, with given name.

func AddCTLayer4D added in v1.1.2

func AddCTLayer4D(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *CTLayer

AddCTLayer4D adds a CTLayer of given size, with given name.

func (*CTLayer) Build added in v1.1.2

func (ly *CTLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*CTLayer) Class added in v1.1.2

func (ly *CTLayer) Class() string

func (*CTLayer) CtxtFmGe added in v1.1.2

func (ly *CTLayer) CtxtFmGe(ltime *leabra.Time)

CtxtFmGe integrates new CtxtGe excitatory conductance from projections, and computes overall Ctxt value, only on Deep layers. This must be called at the end of the DeepBurst quarter for this layer, after SendCtxtGe.

func (*CTLayer) Defaults added in v1.1.2

func (ly *CTLayer) Defaults()

func (*CTLayer) GFmInc added in v1.1.2

func (ly *CTLayer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

func (*CTLayer) InitActs added in v1.1.2

func (ly *CTLayer) InitActs()

func (*CTLayer) SendCtxtGe added in v1.1.2

func (ly *CTLayer) SendCtxtGe(ltime *leabra.Time)

SendCtxtGe sends activation over CTCtxtPrjn projections to integrate CtxtGe excitatory conductance on CT layers. This must be called at the end of the Burst quarter for this layer. Satisfies the CtxtSender interface.

func (*CTLayer) UnitVal1D added in v1.1.2

func (ly *CTLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*CTLayer) UnitVarIdx added in v1.1.2

func (ly *CTLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*CTLayer) UnitVarNames added in v1.1.2

func (ly *CTLayer) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*CTLayer) UnitVarNum added in v1.1.2

func (ly *CTLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type CtxtSender added in v1.1.2

type CtxtSender interface {
	leabra.LeabraLayer

	// SendCtxtGe sends activation over CTCtxtPrjn projections to integrate
	// CtxtGe excitatory conductance on CT layers.
	// This must be called at the end of the Burst quarter for this layer.
	SendCtxtGe(ltime *leabra.Time)
}

CtxtSender is an interface for layers that implement the SendCtxtGe method (SuperLayer, CTLayer)
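
Network.CTCtxt (below) can be sketched in terms of this interface -- a two-pass update, all senders first, then CT integration (illustrative, not the package source):

    // ctCtxt mirrors the SendCtxtGe / CtxtFmGe ordering documented here.
    func ctCtxt(nt *Network, ltime *leabra.Time) {
        // pass 1: all CtxtSender layers (Super, CT) send over CTCtxtPrjns
        for _, ly := range nt.Layers {
            if cs, ok := ly.(CtxtSender); ok && !ly.IsOff() {
                cs.SendCtxtGe(ltime)
            }
        }
        // pass 2: CT layers integrate the received CtxtGe
        for _, ly := range nt.Layers {
            if ct, ok := ly.(*CTLayer); ok && !ly.IsOff() {
                ct.CtxtFmGe(ltime)
            }
        }
    }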

type Driver added in v1.1.4

type Driver struct {

	// driver layer
	Driver string `desc:"driver layer"`

	// offset into TRC pool
	Off int `inactive:"-" desc:"offset into TRC pool"`
}

Driver describes the source of driver inputs from cortex into TRC (pulvinar)

type Drivers added in v1.1.4

type Drivers []*Driver

Drivers is a list of drivers

func (*Drivers) Add added in v1.1.4

func (dr *Drivers) Add(laynms ...string)

Add adds new driver(s)

func (*Drivers) AddOne added in v1.1.15

func (dr *Drivers) AddOne(laynm string)

AddOne adds one new driver -- Python does not work with varargs

type EPool added in v1.1.4

type EPool struct {

	// layer name
	LayNm string `desc:"layer name"`

	// general scaling factor for how much excitation from this pool
	Wt float32 `desc:"general scaling factor for how much excitation from this pool"`
}

EPool specifies how to gather excitation across pools

func (*EPool) Defaults added in v1.1.4

func (ep *EPool) Defaults()

type EPools added in v1.1.4

type EPools []*EPool

EPools is a list of pools

func (*EPools) Add added in v1.1.4

func (ep *EPools) Add(laynm string, wt float32) *EPool

Add adds a new epool connection

func (*EPools) Validate added in v1.1.4

func (ep *EPools) Validate(net emer.Network, ctxt string) error

Validate ensures that LayNames layers are valid. ctxt is string for error message to provide context.

type IPool added in v1.1.4

type IPool struct {

	// layer name
	LayNm string `desc:"layer name"`

	// general scaling factor for how much overall inhibition from this pool contributes, in a non-pool-specific manner
	Wt float32 `desc:"general scaling factor for how much overall inhibition from this pool contributes, in a non-pool-specific manner"`

	// scaling factor for how much corresponding pools contribute in a pool-specific manner, using offsets and averaging across pools as needed to match geometry
	PoolWt float32 `` /* 160-byte string literal not displayed */

	// offset into source, sending layer
	SOff evec.Vec2i `desc:"offset into source, sending layer"`

	// offset into our own receiving layer
	ROff evec.Vec2i `desc:"offset into our own receiving layer"`
}

IPool specifies how to gather inhibition across pools

func (*IPool) Defaults added in v1.1.4

func (ip *IPool) Defaults()

type IPools added in v1.1.4

type IPools []*IPool

IPools is a list of pools

func (*IPools) Add added in v1.1.4

func (ip *IPools) Add(laynm string, wt float32) *IPool

Add adds a new ipool connection

func (*IPools) Validate added in v1.1.4

func (ip *IPools) Validate(net emer.Network, ctxt string) error

Validate ensures that LayNames layers are valid. ctxt is string for error message to provide context.

type LayerType added in v1.0.0

type LayerType emer.LayerType

LayerType has the DeepLeabra extensions to the emer.LayerType types, for gui

const (
	CT_ LayerType = LayerType(emer.LayerTypeN) + iota
	TRC_
	LayerTypeN
)

gui versions

func StringToLayerType added in v1.0.0

func StringToLayerType(s string) (LayerType, error)

func (LayerType) String added in v1.0.0

func (i LayerType) String() string

type Network

type Network struct {
	leabra.Network
}

deep.Network has parameters for running a DeepLeabra network

func (*Network) AddDeep2D added in v1.1.4

func (nt *Network) AddDeep2D(name string, shapeY, shapeX int) (super, ct, pulv emer.Layer)

AddDeep2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn OneToOne projection from Super to CT. Optionally creates a TRC Pulvinar for Super. CT is placed Behind Super, and Pulvinar behind CT if created.

func (*Network) AddDeep2DFakeCT added in v1.1.26

func (nt *Network) AddDeep2DFakeCT(name string, shapeY, shapeX int) (super, ct, pulv emer.Layer)

AddDeep2DFakeCT adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with FAKE CTCtxtPrjn OneToOne projection from Super to CT. Optionally creates a TRC Pulvinar for Super. CT is placed Behind Super, and Pulvinar behind CT if created.

func (*Network) AddDeep4D added in v1.1.4

func (nt *Network) AddDeep4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) (super, ct, pulv emer.Layer)

AddDeep4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn OneToOne projection from Super to CT. Optionally creates a TRC Pulvinar for Super. CT is placed Behind Super, and Pulvinar behind CT if created.

func (*Network) AddDeep4DFakeCT added in v1.1.26

func (nt *Network) AddDeep4DFakeCT(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) (super, ct, pulv emer.Layer)

AddDeep4DFakeCT adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with FAKE CTCtxtPrjn OneToOne projection from Super to CT. Optionally creates a TRC Pulvinar for Super. CT is placed Behind Super, and Pulvinar behind CT if created.

func (*Network) AddDeepNoTRC2D added in v1.1.4

func (nt *Network) AddDeepNoTRC2D(name string, shapeY, shapeX int) (super, ct emer.Layer)

AddDeepNoTRC2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn OneToOne projection from Super to CT, and NO TRC Pulvinar. CT is placed Behind Super.

func (*Network) AddDeepNoTRC4D added in v1.1.4

func (nt *Network) AddDeepNoTRC4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) (super, ct emer.Layer)

AddDeepNoTRC4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn PoolOneToOne projection from Super to CT, and NO TRC Pulvinar. CT is placed Behind Super.

func (*Network) CTCtxt added in v1.1.2

func (nt *Network) CTCtxt(ltime *leabra.Time)

CTCtxt sends context to CT layers and integrates CtxtGe on CT layers

func (*Network) ConnectCtxtToCT added in v1.1.2

func (nt *Network) ConnectCtxtToCT(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectCtxtToCT adds a CTCtxtPrjn from given sending layer to a CT layer

func (*Network) Defaults

func (nt *Network) Defaults()

Defaults sets all the default parameters for all layers and projections

func (*Network) QuarterFinal

func (nt *Network) QuarterFinal(ltime *leabra.Time)

QuarterFinal does updating after end of a quarter

func (*Network) UnitVarNames added in v1.1.0

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*Network) UpdateParams

func (nt *Network) UpdateParams()

UpdateParams updates all the derived parameters if any have changed, for all layers and projections

type PrjnType added in v1.0.0

type PrjnType emer.PrjnType

PrjnType has the DeepLeabra extensions to the emer.PrjnType types, for gui

const (
	CTCtxt_ PrjnType = PrjnType(emer.PrjnTypeN) + iota
	PrjnTypeN
)

gui versions

func StringToPrjnType added in v1.0.0

func StringToPrjnType(s string) (PrjnType, error)

func (PrjnType) String added in v1.0.0

func (i PrjnType) String() string

type SuperLayer added in v1.1.2

type SuperLayer struct {
	TopoInhibLayer // access as .TopoInhibLayer

	// [view: inline] parameters for computing Burst from act, in Superficial layers (but also needed in Deep layers for deep self connections)
	Burst BurstParams `` /* 142-byte string literal not displayed */

	// [view: inline] determine how the TRCLayer activation modulates SuperLayer feedforward excitatory conductances, representing TRC effects on layer 4 inputs (not separately simulated) -- must have a valid layer.
	Attn TRCAttnParams `` /* 215-byte string literal not displayed */

	// slice of super neuron values -- same size as Neurons
	SuperNeurs []SuperNeuron `desc:"slice of super neuron values -- same size as Neurons"`
}

SuperLayer is the DeepLeabra superficial layer, based on basic rate-coded leabra.Layer. Computes the Burst activation from regular activations.

func AddSuperLayer2D added in v1.1.2

func AddSuperLayer2D(nt *leabra.Network, name string, nNeurY, nNeurX int) *SuperLayer

AddSuperLayer2D adds a SuperLayer of given size, with given name.

func AddSuperLayer4D added in v1.1.2

func AddSuperLayer4D(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *SuperLayer

AddSuperLayer4D adds a SuperLayer of given size, with given name.

func (*SuperLayer) ActFmG added in v1.1.4

func (ly *SuperLayer) ActFmG(ltime *leabra.Time)

func (*SuperLayer) Build added in v1.1.2

func (ly *SuperLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*SuperLayer) BurstFmAct added in v1.1.2

func (ly *SuperLayer) BurstFmAct(ltime *leabra.Time)

BurstFmAct updates Burst layer 5IB bursting value from current Act (superficial activation), subject to thresholding.

func (*SuperLayer) BurstPrv added in v1.1.2

func (ly *SuperLayer) BurstPrv()

BurstPrv saves Burst as BurstPrv

func (*SuperLayer) CyclePost added in v1.1.2

func (ly *SuperLayer) CyclePost(ltime *leabra.Time)

CyclePost calls BurstFmAct

func (*SuperLayer) DecayState added in v1.1.2

func (ly *SuperLayer) DecayState(decay float32)

func (*SuperLayer) Defaults added in v1.1.2

func (ly *SuperLayer) Defaults()

func (*SuperLayer) InitActs added in v1.1.2

func (ly *SuperLayer) InitActs()

func (*SuperLayer) QuarterFinal added in v1.1.2

func (ly *SuperLayer) QuarterFinal(ltime *leabra.Time)

QuarterFinal does updating after end of a quarter

func (*SuperLayer) SendCtxtGe added in v1.1.2

func (ly *SuperLayer) SendCtxtGe(ltime *leabra.Time)

SendCtxtGe sends Burst activation over CTCtxtPrjn projections to integrate CtxtGe excitatory conductance on CT layers. This must be called at the end of the Burst quarter for this layer. Satisfies the CtxtSender interface.

func (*SuperLayer) TRCLayer added in v1.1.4

func (ly *SuperLayer) TRCLayer() (*leabra.Layer, error)

TRCLayer returns the TRC layer for attentional modulation

func (*SuperLayer) UnitVal1D added in v1.1.2

func (ly *SuperLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*SuperLayer) UnitVarIdx added in v1.1.2

func (ly *SuperLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*SuperLayer) UnitVarNames added in v1.1.2

func (ly *SuperLayer) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*SuperLayer) UnitVarNum added in v1.1.2

func (ly *SuperLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

func (*SuperLayer) UpdateParams added in v1.1.2

func (ly *SuperLayer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer

func (*SuperLayer) ValidateTRCLayer added in v1.1.4

func (ly *SuperLayer) ValidateTRCLayer() error

type SuperNeuron added in v1.1.2

type SuperNeuron struct {

	// 5IB bursting activation value, computed by thresholding regular activation
	Burst float32 `desc:"5IB bursting activation value, computed by thresholding regular activation"`

	// previous bursting activation -- used for context-based learning
	BurstPrv float32 `desc:"previous bursting activation -- used for context-based learning"`

	// attentional signal from TRC layer
	Attn float32 `desc:"attentional signal from TRC layer"`
}

SuperNeuron has the neuron values for SuperLayer

func (*SuperNeuron) VarByIdx added in v1.1.2

func (sn *SuperNeuron) VarByIdx(idx int) float32

type TRCAttnParams added in v1.1.4

type TRCAttnParams struct {

	// is attentional modulation active?
	On bool `desc:"is attentional modulation active?"`

	// minimum act multiplier if attention is 0
	Min float32 `desc:"minimum act multiplier if attention is 0"`

	// name of TRC layer -- defaults to layer name + P
	TRCLay string `desc:"name of TRC layer -- defaults to layer name + P"`
}

TRCAttnParams determine how the TRCLayer activation modulates SuperLayer activations

func (*TRCAttnParams) Defaults added in v1.1.4

func (at *TRCAttnParams) Defaults()

func (*TRCAttnParams) ModVal added in v1.1.4

func (at *TRCAttnParams) ModVal(val float32, attn float32) float32

ModVal returns the attn-modulated value
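
A plausible form for this modulation, inferred from the Min field doc (an assumption, not the verified source): the multiplier scales linearly from Min at attn = 0 up to 1 at attn = 1:

    // assumed: ModVal(val, attn) = val * (Min + (1-Min)*attn)
    func modVal(val, attn, min float32) float32 {
        return val * (min + (1-min)*attn)
    }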

type TRCLayer added in v1.1.2

type TRCLayer struct {
	TopoInhibLayer // access as .TopoInhibLayer

	// [view: inline] parameters for computing TRC plus-phase (outcome) activations based on Burst activation from corresponding driver neuron
	TRC TRCParams `` /* 141-byte string literal not displayed */

	// driver layers that send 5IB Burst driver inputs to this layer
	Drivers Drivers `desc:"driver layers that send 5IB Burst driver inputs to this layer"`
}

TRCLayer is the thalamic relay cell layer for DeepLeabra. It has normal activity during the minus phase, as activated by CT and other inputs, and is then driven by strong 5IB driver inputs in the plus phase. For attentional modulation, TRC maintains pool-level correspondence with CT inputs, which creates challenges for aligning with driver inputs.

  • A Max operation is used to integrate across multiple drivers where necessary, e.g., when multiple driver pools map onto a single TRC pool (a common feedforward theme), *even when there is no logical connection for the i'th unit in each pool* -- to make this dimensionality reduction more effective, lateral connectivity between pools that favors this correspondence is beneficial. Overall, this is consistent with typical DCNN max-pooling organization.
  • Typically, pooled 4D TRC layers should have fewer pools than driver layers, in which case the respective pool geometry is interpolated. Ideally, integer size differences are best (e.g., the driver layer has 2x the pools of the TRC).
  • A pooled 4D TRC layer should in general not predict flat 2D drivers, but if it does, the drivers are replicated for each pool.
  • Similarly, there shouldn't generally be more TRC pools than driver pools, but if so, drivers replicate across pools.

func AddTRCLayer2D added in v1.1.2

func AddTRCLayer2D(nt *leabra.Network, name string, nNeurY, nNeurX int) *TRCLayer

AddTRCLayer2D adds a TRCLayer of given size, with given name.

func AddTRCLayer4D added in v1.1.2

func AddTRCLayer4D(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *TRCLayer

AddTRCLayer4D adds a TRCLayer of given size, with given name.

func (*TRCLayer) Class added in v1.1.2

func (ly *TRCLayer) Class() string

func (*TRCLayer) Defaults added in v1.1.2

func (ly *TRCLayer) Defaults()

func (*TRCLayer) DriverLayer added in v1.1.2

func (ly *TRCLayer) DriverLayer(drv *Driver) (*leabra.Layer, error)

DriverLayer returns the driver layer for given Driver

func (*TRCLayer) GFmInc added in v1.1.2

func (ly *TRCLayer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

func (*TRCLayer) InitWts added in v1.1.5

func (ly *TRCLayer) InitWts()

func (*TRCLayer) IsTarget added in v1.1.19

func (ly *TRCLayer) IsTarget() bool

func (*TRCLayer) SetDriverActs added in v1.1.4

func (ly *TRCLayer) SetDriverActs()

SetDriverActs sets the driver activations, integrating across all the driver layers

func (*TRCLayer) SetDriverNeuron added in v1.1.4

func (ly *TRCLayer) SetDriverNeuron(tni int, drvGe, drvInhib float32)

SetDriverNeuron sets the driver activation for given Neuron, based on given Ge driving value (use DriveFmMaxAvg) from driver layer (Burst or Act)

func (*TRCLayer) SetDriverOffs added in v1.1.4

func (ly *TRCLayer) SetDriverOffs() error

SetDriverOffs sets the driver offsets

func (*TRCLayer) UpdateParams added in v1.1.2

func (ly *TRCLayer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer

type TRCParams added in v1.0.0

type TRCParams struct {

	// [def: false] Turn off the driver inputs, in which case this layer behaves like a standard layer
	DriversOff bool `def:"false" desc:"Turn off the driver inputs, in which case this layer behaves like a standard layer"`

	// Quarter(s) when bursting occurs -- typically Q4 but can also be Q2 and Q4 for beta-frequency updating.  Note: this is a bitflag and must be accessed using its Set / Has etc routines
	BurstQtr leabra.Quarters `` /* 188-byte string literal not displayed */

	// [def: 0.3] [min: 0.0] multiplier on driver input strength, multiplies activation of driver layer
	DriveScale float32 `def:"0.3" min:"0.0" desc:"multiplier on driver input strength, multiplies activation of driver layer"`

	// [def: 0.6] [min: 0.01] Level of Max driver layer activation at which the predictive non-burst inputs are fully inhibited.  Computationally, it is essential that driver inputs inhibit effect of predictive non-driver (CTLayer) inputs, so that the plus phase is not always just the minus phase plus something extra (the error will never go to zero then).  When max driver act input exceeds this value, predictive non-driver inputs are fully suppressed.  If there is only weak burst input however, then the predictive inputs remain and this critically prevents the network from learning to turn activation off, which is difficult and severely degrades learning.
	MaxInhib float32 `` /* 662-byte string literal not displayed */

	// Do not treat the pools in this layer as topographically organized relative to driver inputs -- all drivers compress down to give same input to all pools
	NoTopo bool `` /* 159-byte string literal not displayed */

	// [min: 0] [max: 1] proportion of average across driver pools that is combined with Max to provide some graded tie-breaker signal -- especially important for large pool downsampling, e.g., when doing NoTopo
	AvgMix float32 `` /* 209-byte string literal not displayed */

	// Apply threshold to driver burst input for computing plus-phase activations -- above BinThr, then Act = BinOn, below = BinOff.  This is beneficial for layers with weaker graded activations, such as V1 or other perceptual inputs.
	Binarize bool `` /* 234-byte string literal not displayed */

	// [viewif: Binarize] Threshold for binarizing in terms of sending Burst activation
	BinThr float32 `viewif:"Binarize" desc:"Threshold for binarizing in terms of sending Burst activation"`

	// [def: 0.3] [viewif: Binarize] Resulting driver Ge value for units above threshold -- lower value around 0.3 or so seems best (DriveScale is NOT applied -- generally same range as that).
	BinOn float32 `` /* 190-byte string literal not displayed */

	// [def: 0] [viewif: Binarize] Resulting driver Ge value for units below threshold -- typically 0.
	BinOff float32 `def:"0" viewif:"Binarize" desc:"Resulting driver Ge value for units below threshold -- typically 0."`
}

TRCParams provides parameters for how the plus-phase (outcome) state of thalamic relay cell (e.g., Pulvinar) neurons is computed from the corresponding driver neuron Burst activation.

func (*TRCParams) Defaults added in v1.0.0

func (tp *TRCParams) Defaults()

func (*TRCParams) DriveGe added in v1.1.2

func (tp *TRCParams) DriveGe(act float32) float32

DriveGe returns effective excitatory conductance to use for given driver input Burst activation

func (*TRCParams) GeFmMaxAvg added in v1.1.6

func (tp *TRCParams) GeFmMaxAvg(max, avg float32) float32

GeFmMaxAvg returns the drive Ge value as function of max and average
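
A sketch of these drive computations inferred from the parameter docs above (assumptions, not the verified source; Binarize and MaxInhib handling omitted):

    // driveGe: DriveScale multiplies the driver activation
    func driveGe(tp *deep.TRCParams, act float32) float32 {
        return tp.DriveScale * act
    }

    // geFmMaxAvg: AvgMix blends the pool average into the Max
    func geFmMaxAvg(tp *deep.TRCParams, mx, avg float32) float32 {
        return tp.DriveScale * ((1-tp.AvgMix)*mx + tp.AvgMix*avg)
    }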

func (*TRCParams) Update added in v1.0.0

func (tp *TRCParams) Update()

type TRNLayer added in v1.1.4

type TRNLayer struct {
	leabra.Layer

	// layers that we receive inhibition from
	ILayers emer.LayNames `desc:"layers that we receive inhibition from"`
}

TRNLayer copies inhibition from pools in CT and TRC layers, and from other TRNLayers, and pools this inhibition using the Max operation

func (*TRNLayer) Defaults added in v1.1.4

func (ly *TRNLayer) Defaults()

func (*TRNLayer) InitActs added in v1.1.4

func (ly *TRNLayer) InitActs()

InitActs fully initializes activation state -- only called automatically during InitWts

type TopoInhib added in v1.1.4

type TopoInhib struct {

	// use topographic inhibition
	On bool `desc:"use topographic inhibition"`

	// half-width of topographic inhibition within layer
	Width int `desc:"half-width of topographic inhibition within layer"`

	// normalized gaussian sigma as proportion of Width, for gaussian weighting
	Sigma float32 `desc:"normalized gaussian sigma as proportion of Width, for gaussian weighting"`

	// overall inhibition multiplier for topographic inhibition (generally <= 1)
	Gi float32 `desc:"overall inhibition multiplier for topographic inhibition (generally <= 1)"`

	// layer-level baseline inhibition factor for Max computation -- ensures a baseline inhib as proportion of maximum inhib within any single pool
	LayGi float32 `` /* 147-byte string literal not displayed */

	// gaussian weights as function of distance, precomputed.  index 0 = dist 1
	Wts []float32 `inactive:"+" desc:"gaussian weights as function of distance, precomputed.  index 0 = dist 1"`
}

TopoInhib provides for topographic gaussian inhibition integrating over a neighborhood. Effective inhibition is the MAX of the gaussian-weighted pool-level contributions and the layer-level baseline (LayGi).
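
A sketch of the precomputed gaussian weights per the field docs above (an assumption about the exact form; import "math"):

    // topoWts precomputes gaussian weights by pool distance d = 1..Width;
    // Sigma is specified as a proportion of Width, and Gi scales the result.
    func topoWts(width int, sigma, gi float32) []float32 {
        wts := make([]float32, width)
        sig := float64(sigma) * float64(width)
        for i := range wts {
            d := float64(i + 1) // index 0 = dist 1, per the Wts docs
            wts[i] = gi * float32(math.Exp(-0.5*(d/sig)*(d/sig)))
        }
        return wts
    }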

func (*TopoInhib) Defaults added in v1.1.4

func (ti *TopoInhib) Defaults()

func (*TopoInhib) Update added in v1.1.4

func (ti *TopoInhib) Update()

type TopoInhibLayer added in v1.1.4

type TopoInhibLayer struct {
	leabra.Layer // access as .Layer

	// topographic inhibition parameters for pool-level inhibition (only used for layers with pools)
	TopoInhib TopoInhib `desc:"topographic inhibition parameters for pool-level inhibition (only used for layers with pools)"`
}

TopoInhibLayer is a layer with topographically organized inhibition among pools

func (*TopoInhibLayer) Defaults added in v1.1.4

func (ly *TopoInhibLayer) Defaults()

func (*TopoInhibLayer) InhibFmGeAct added in v1.1.4

func (ly *TopoInhibLayer) InhibFmGeAct(ltime *leabra.Time)

InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools

func (*TopoInhibLayer) TopoGi added in v1.1.4

func (ly *TopoInhibLayer) TopoGi(ltime *leabra.Time)

TopoGi computes topographic Gi between pools

func (*TopoInhibLayer) TopoGiPos added in v1.1.4

func (ly *TopoInhibLayer) TopoGiPos(py, px, d int) float32

TopoGiPos returns position-specific Gi contribution

func (*TopoInhibLayer) UpdateParams added in v1.1.4

func (ly *TopoInhibLayer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer
