Common toads evaluate the absolute size of prey binocularly and/or monocularly (Ewert & Gebauer 1973; cit. Ewert 1984). Among configurationally “neutral” square black prey dummies (Figure 2.8A,s), they prefer, at different distances, an object of about s = 10 mm edge length; generally:
s = 0.37·w for 5 ≤ w ≤ 26 mm, where w is the mouth width,
a relation that holds across anuran species, e.g., Bufo, Rana, Hyla, Bombina, and Alytes. The size-constancy phenomenon operates with stereoscopic vision after metamorphosis (Ewert & Burghagen 1969; cit. Ewert 1984). Monocularly, it requires estimating the object’s distance in connection with the toad’s movement relative to the object, e.g., by triangulation (brief head shifting) and/or accommodation (lens shifting) (Collett 1977). Depth information reaches, for example, T5-type neurons.
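The linear size relation above can be written as a minimal sketch; the function name and the range check are illustrative conveniences, not part of the original study:

```python
def preferred_prey_size(mouth_width_mm: float) -> float:
    """Preferred prey edge length s (mm) as a function of mouth width w (mm),
    per the relation s = 0.37 * w, fitted for 5 <= w <= 26 mm."""
    if not 5 <= mouth_width_mm <= 26:
        raise ValueError("relation fitted only for 5 <= w <= 26 mm")
    return 0.37 * mouth_width_mm
```

For example, a toad with a 20 mm mouth width would, by this relation, prefer prey of about 7.4 mm edge length.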
Evidence: In pharmacologically immobilized awake toads, T5-type neurons measure objects in degrees of visual angle (Figure 2.8B,s). In freely moving toads, however, T5-type neurons are sensitive to an object’s absolute size (Spreckelsen et al. 1995). Accordingly, the features(p,c)-relating algorithm can be traced on scales of absolute size or of visual angular size (Figure 2.8, cf. A,B).
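One way to caricature a features(p,c)-relating algorithm is a response that grows with an object’s extent p parallel to its movement direction (worm-like) and shrinks with its extent c perpendicular to it (antiworm-like). The weights and the logarithmic compression below are illustrative assumptions, not the measured tuning functions:

```python
import math

def prey_selectivity(p: float, c: float,
                     w_p: float = 1.0, w_c: float = 1.5) -> float:
    """Caricature of a features(p,c)-relating algorithm: the response
    increases with edge length p parallel to movement and decreases with
    edge length c across movement; negative drive is clipped to zero.
    Weights and log compression are illustrative, not fitted values."""
    return max(0.0, w_p * math.log1p(p) - w_c * math.log1p(c))
```

With these assumed parameters, a worm-like stimulus (p = 8, c = 2) yields a positive response, while the same stripe in antiworm configuration (p = 2, c = 8) yields none, mirroring the prey/antiworm preference described in the text.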
Visuomotor access
Prey-selective T5.2-neurons send their axons from the optic tectum to the medullary premotor/motor cells that innervate jaw and tongue muscles (Weerasuriya & Ewert 1981, Satou & Ewert 1983; cit. Ewert 1984; see also Table 2.6).
Table 2.6 Minimum number of cell types linking photoreception and snapping reaction. Additional interacting cells (e.g., retinal horizontal and amacrine cells; pretectal/thalamic neurons) have tuning and specifying functions at different processing levels.
Such a neuron can be examined by recording its activity, via an electrode chronically implanted in the tectum of a freely moving toad, in response to a prey-like stripe: a strong burst of impulses preceded the toad’s tongue projection at the prey (Schürg-Pfeiffer et al. 1993). The same stripe presented in threat configuration elicited little neuronal activity and no prey-catching, probably due to pretectal/thalamic inhibition (see Figure 2.10B).
Test: If a second electrode, fastened to the skull, delivered an electrolytic lesion to the ipsilateral pretectal thalamus, a moving stripe strongly activated the prey-selective T5.2 neuron and elicited prey-capture regardless of whether the stripe was presented in prey or threat configuration. The pretectal/thalamic lesion (Figure 2.10D) impaired the discrimination between prey and threat both neuronally and behaviorally, evidencing a linkage between prey-selective neuronal activity and prey-catching behavior (Schürg-Pfeiffer et al. 1993).
No motor command can be issued when motivation and attention are not appropriate: if a toad was satiated after feeding on mealworms, or frightened by a noise made by the experimenter, a prey object neither activated T5.2 neurons nor elicited any prey capture.
Modification of species-specific feature detection by learning
After snapping at a hive-bee, the painful/distasteful incident, conditioned to the bee’s appearance, prevents a toad from catching such bees again (Cott 1936).
Can a toad be trained to catch a threatening stimulus? Usually, toads are frightened by a moving hand. When a toad was fed daily with a mealworm presented in the experimenter’s hand, it associated the hand with food and became tame (Brzoska & Schneider 1978). After approximately a fortnight of hand-feeding, once a day, the moving hand alone released snapping and, by generalization, a moving large square or threat-like stripe was included in the toad’s prey category. Species-specific prey recognition was thus extended by individual experience; in terms of classic ethology, a “modified IRM” developed (Ewert 1997; see also Further Reading, Movie A3).
How can this modification be explained? Recalling the “window hypothesis,” we suggest a neural structure in the telencephalon that, during hand-feeding training, becomes sensitized to the contiguous presentation of prey and threat signals. In the learning phase, these sensitized neurons inhibit threat detection, which leads to a sort of “disinhibition” of prey detection.
How was this checked? The study proceeded in three steps. Step 1: a toad, its left eye covered, was trained by hand-feeding. Step 2: while the trained toad, both eyes uncovered, was catching a moving large square, 14C-2DG uptake rose in the posterior ventromedial pallium of the left telencephalon (Figure 2.9Ba, MP). This structure receives visual information from the right eye (uncovered during training) and projects to thalamic/pretectal regions. It is homologous to the mammalian hippocampus, which is known to be involved in learning. Step 3: after lesions of that pallial structure, the training effects disappeared and species-specific prey recognition, the “IRM” in classic-ethological terms, reappeared (Ewert et al. 1994).
Summarizing, prey-catching releasing systems in toads contain, inter alia, midbrain neurons with species-specific prey-selective characteristics. Ontogenetically, configurational prey selection is present after the toad’s metamorphosis, in the context of the aquatic-to-terrestrial transition. It can be modified by individual experience, e.g., via a forebrain loop involving the ventromedial pallium, the “primordium hippocampi.”
There are promising studies concerning modulatory functions of diencephalic pretectal/thalamic and hypothalamic nuclei on the stimulus-response pathways that mediate prey-catching and threat-avoiding (e.g., see Ewert & Schwippert 2006; Islam et al. 2019; Prater et al. 2020).
Sensorimotor codes
The concept of a command releasing system (CRS) interprets Tinbergen’s concept of the (innate) releasing mechanism in a neurophysiological context.
A CRS considers combinatorial aspects of stimulus perception as a sensorimotor code in a sensorimotor interface. A coded command involves different types of neurons, each type monitoring or analyzing a certain stimulus aspect, e.g., prey-selective T5.2 neurons. The idea is that a certain combination of such command elements cooperatively activates a certain motor pattern–generating system in the presence of adequate motivational and attentional inputs. It is suggested that certain command elements can be shared by different sensorimotor codes.
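The combinatorial logic of a CRS can be sketched as a gate: a motor pattern is released only when a required combination of command elements is active and the motivational and attentional inputs are adequate. The element names and the set-based encoding below are hypothetical illustrations, not a claim about the actual code:

```python
def command_release(active_elements: set, required: set,
                    motivated: bool, attentive: bool) -> bool:
    """Sketch of a command releasing system (CRS): a motor command is
    issued only if all required command elements are active AND the
    motivational/attentional gates are open."""
    return motivated and attentive and required <= active_elements

# Hypothetical sensorimotor code for snapping: a prey-selective T5.2
# element combined with a second tectal element; names are illustrative.
snap_code = {"T5.2", "T5.1"}
```

Sharing of elements between codes then falls out naturally: two different `required` sets may overlap in, say, the T5.2 element while differing in the rest of the combination. The gate also captures the observation above that a satiated or frightened toad issues no command despite an adequate stimulus.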
Modeling toad’s visual pattern recognition
Building on the neuroethological results on the toad’s visual system (e.g., Figure 2.10), artificial pattern recognizers were developed using systems-theoretical approaches (Ewert & v. Seelen 1974; cit. Ewert 1984), computer models taking advantage of the relevant cytological brain structures (Lara et al. 1982), and artificial neural networks, ANNs, trained by backpropagation algorithms (Ewert 2004). Different ANNs applying algorithms for reinforcement learning, classical conditioning, and genetic operations are described by Reddipogu et al. (2002) and Yoshida (2016). Hence, there are various ways of modeling brain/behavior functions: global models are heuristic; ANNs serve approximation and optimization, e.g., by implementing an algorithm.
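A toy version of such a trained recognizer, far simpler than the cited ANN models, is a single logistic unit fitted by gradient descent to separate worm-like from antiworm-like (p, c) stimuli; the training data and architecture here are illustrative assumptions:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.1, epochs=1000):
    """Fit a single logistic unit by per-sample gradient descent.
    Each sample is (p, c, label): label 1 = worm-like, 0 = antiworm-like.
    This is a minimal sketch, not one of the published toad models."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for p, c, y in samples:
            out = sigmoid(w[0] * p + w[1] * c + b)
            err = out - y                 # gradient of logistic loss w.r.t. z
            w[0] -= lr * err * p
            w[1] -= lr * err * c
            b    -= lr * err
    return w, b

# Illustrative stimuli: long axis parallel to movement (worm) vs. across it.
data = [(8, 2, 1), (6, 1, 1), (2, 8, 0), (1, 6, 0)]
w, b = train(data)
```

After training on this separable toy set, the unit assigns a high probability to a worm-like stimulus (p = 8, c = 2) and a low one to the same stripe rotated into antiworm configuration (p = 2, c = 8), effectively learning a weighting of p against c.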
Why modeling? 1) A model offers a representation of the processes within the modeled system; hence, models have an explanatory function. 2) Models are predictive. Predictions can be tested by adequate experiments, and the results, in turn, may improve the model. 3) Models are, in a sense, creative, since they may exhibit unexpected properties. 4) Models provide tools for artificial intelligence, such as in the growing field of neuroengineering.
For example, the German Federal Ministry for Research and Technology (BMFT) supported a joint project called “Sensori-Motor