Scientific article search
Search cataloged records.
Cross-language equivalences in the search are generated automatically and may contain errors. For more accurate results, also search for the term directly in the desired language.
Issue Information
Year: 2025
Low occasion setter salience results in learning conditional stimulus partial reinforcement instead of occasion setting
Year: 2025
Abstract: In real‐world settings, stimulus and outcome associations often depend on situational factors, such as Pavlovian occasion setters (OSs), which disambiguate whether a conditional stimulus (CS) will predict an outcome (unconditional stimulus; US). Whereas previous studies show that OSs are often lower in salience than CSs, no study has examined how low‐salience OSs affect learning. In two conditioning experiments, we investigated this from the premise that inconsistently reinforced CSs prompt searching for additional stimuli (OSs) that indicate whether the CS will be followed by the US. Occasion setting learning was assessed using extinction rate—as partial reinforcement slows extinction relative to continuous reinforcement—and self‐reported latent learning of stimuli. We hypothesized that a high‐salience OS would result in faster extinction rates and occasion setting learning, whereas a low‐salience OS would result in slower extinction rates and CS partial reinforcement learning. The results of Experiment 1 were mixed; there was no effect of OS salience on extinction rate, but the results for latent learning supported the hypothesis. We conducted Experiment 2 to specifically test extinction rate, and the results supported our hypothesis. The findings suggest that if a salient OS is found, occasion setting is learned; otherwise, CS partial reinforcement is learned.
Machine learning to detect schedules using spatiotemporal data of behavior: A proof of concept
Year: 2025
Abstract: Traditionally, the experimental analysis of behavior has relied on the single discrete response paradigm (e.g., key pecks, lever presses, screen clicks) to identify behavioral patterns. However, the development and availability of new technology allow researchers to move beyond this paradigm and use other features to detect schedules. Thus, our study used spatiotemporal data to compare the accuracy of four machine learning algorithms (i.e., logistic regression, support vector classifiers, random forests, and artificial neural networks) in detecting the presence and the components of time‐based schedules in 12 rats involved in a behavioral experiment. Using spatiotemporal data, the algorithms accurately identified the presence or absence of programmed schedules and correctly differentiated between fixed‐ and variable‐space schedules. That said, our analyses failed to identify an algorithm to discriminate fixed‐time from variable‐time schedules. Furthermore, none of the algorithms performed systematically better than the others. Our findings provide preliminary support for the utility of using spatiotemporal data with machine learning to detect stimulus schedules.
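The abstract names four classifier families compared on spatiotemporal features. The study's data and code are not shown here; the sketch below is a hypothetical illustration of how such a comparison could be set up with scikit-learn, using synthetic stand-in features (the feature names and class labels are assumptions, not the study's variables).

```python
# Hypothetical sketch of comparing the four classifier families from the
# abstract on synthetic "spatiotemporal" features; NOT the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for features such as mean x/y position, distance
# traveled, and dwell time. Class 0 ~ "no schedule", class 1 ~ "schedule".
n = 200
X0 = rng.normal(loc=0.0, scale=1.0, size=(n, 4))
X1 = rng.normal(loc=1.0, scale=1.0, size=(n, 4))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "svc": SVC(),
    "forest": RandomForestClassifier(random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0),
}

# 5-fold cross-validated accuracy per classifier family.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

Cross-validated accuracy gives one common yardstick across the four families, which matches the abstract's observation that no single algorithm dominated.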
Of rats and robots: A mutual learning paradigm
Year: 2025
Abstract: Robots are increasingly used alongside Skinner boxes to train animals in operant conditioning tasks. Similarly, animals are being employed in artificial intelligence research to train various algorithms. However, both types of experiments rely on unidirectional learning, where one partner—the animal or the robot—acts as the teacher and the other as the student. Here, we present a novel animal–robot interaction paradigm that enables bidirectional, or mutual, learning between a Wistar rat and a robot. The two agents interacted with each other to achieve specific goals, dynamically adjusting their actions based on the positive (rewarding) or negative (punishing) signals provided by their partner. The paradigm was tested in silico with two artificial reinforcement learning agents and in vivo with different rat–robot pairs. In the virtual trials, both agents were able to adapt their behavior toward reward maximization, achieving mutual learning. The in vivo experiments revealed that rats rapidly acquired the behaviors necessary to receive the reward and exhibited passive avoidance learning for negative signals when the robot displayed a steep learning curve. The developed paradigm can be used in various animal–machine interactions to test the efficacy of different learning rules and reinforcement schedules.
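The in-silico condition described in the abstract pairs two artificial reinforcement-learning agents whose reward signals come from each other. Under heavy simplification, that idea can be sketched as two tabular bandit learners with a toy "reward the partner when actions match" rule; the rule, learning rate, and agent design here are all assumptions for illustration, not the authors' paradigm.

```python
# Toy sketch of mutual learning between two RL agents; the matching-based
# reward rule is a hypothetical stand-in, NOT the paper's learning rule.
import random

random.seed(1)

class BanditAgent:
    """Two-action learner whose value estimates are driven by the
    reward/punishment signal delivered by its partner."""
    def __init__(self, lr=0.2):
        self.q = [0.0, 0.0]   # action-value estimates
        self.lr = lr

    def act(self, eps=0.1):
        # Epsilon-greedy choice between the two actions.
        if random.random() < eps:
            return random.randrange(2)
        return max(range(2), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Incremental value update toward the received signal.
        self.q[action] += self.lr * (reward - self.q[action])

a, b = BanditAgent(), BanditAgent()
for _ in range(500):
    act_a, act_b = a.act(), b.act()
    # Each agent rewards its partner's action when it matches its own,
    # and punishes otherwise — a toy bidirectional signal.
    a.update(act_a, +1.0 if act_b == act_a else -1.0)
    b.update(act_b, +1.0 if act_a == act_b else -1.0)

print("agent A values:", a.q)
print("agent B values:", b.q)
```

Because each agent's signal depends on the other's behavior, neither is purely teacher or student; under this toy rule the pair settles on a shared preferred action, a minimal analogue of the mutual reward maximization reported in the virtual trials.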
On the prevalence and magnitude of resurgence during delay‐and‐denial tolerance teaching
Year: 2025
Abstract: Resurgence is the recurrence of target behavior (e.g., challenging behavior) during a worsening of reinforcement conditions (e.g., increases in response effort, decreases in alternative reinforcement). Previous studies have examined the prevalence and magnitude of resurgence during functional communication training implemented with discriminative stimuli. We conducted a systematic review of the literature to analyze the magnitude and prevalence of resurgence during delay‐and‐denial tolerance teaching. Similar to previous studies with discriminative stimuli, resurgence occurred for most participants and in about one third of transitions. When resurgence was present, challenging behavior increased to approximately 26% of baseline levels. Resurgence was less likely to occur during response‐effort manipulations (i.e., complexity teaching, tolerance‐response teaching) and was most likely to occur during increases in delays that ended following the passage of time rather than a response criterion. We discuss implications for treatment refinements and future treatment‐relapse research.