formatting error with MathLang fixed
ferponcem committed Sep 18, 2024
1 parent 0261dcd commit dcbd946
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion ibc_data/ibc_tasks.tsv
@@ -53,7 +53,7 @@ LePetitPrince This experiment is a natural language comprehension protocol, orig
LPPLocalizer **Le Petit Prince Localizer** was included as part of the :ref:`LePetitPrince` task and was performed at the end of the second acquisition. It aimed to accurately map the language areas of each participant, which would later be used for further analysis. The stimuli consisted of two types of audio clips: phrases and their reversed versions. The phrases were 2-second voice recordings (audio only) of context-free sentences in French. The reversed stimuli used the same clips but played backward, making the content unintelligible. The run consisted of alternating blocks of 3 trials with phrases (French trials) and 3 trials with reversed phrases (control trials). This localizer was conducted in a single run, lasting 6 minutes and 32 seconds. Expyriment 0.9.0 (Python 3.6) OptoACTIVE (Optoacoustics)
BiologicalMotion1 "The phenomenon known as *biological motion* was first introduced in (`Johansson, 1973 <https://doi.org/10.3758/BF03212378>`__), and consisted of point-light displays arranged and moving in a way that resembled a person moving. The task that we used was originally developed by (`Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__). During the task, the participants were shown a point-light ""walker"", and they had to decide whether the walker's orientation was to the left or to the right, by pressing the index-finger or the middle-finger button of the response box, respectively. The stimuli were divided into 6 different categories: three types of walkers, as well as their reversed versions. The division into categories focuses on three types of information that the participant can extract from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, speed of the points and mirror-symmetric motion. Please see `Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__ for more details about the stimuli. The data were acquired in 4 runs. Each run comprised 12 blocks with 8 trials per block. The stimulus duration was 500ms and the inter-stimulus interval 1500ms (total 16s per block). Each block was followed by a fixation block that also lasted 16s. Each run contained 4 of the 6 conditions, repeated 3 times each. There were 2 different types of runs: type 1 and type 2. This section refers to run type 1, which contained both global types (natural and inverted) and both local naturals. For run type 2, refer to :ref:`BiologicalMotion2`." Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4) 1024x768 `See demo <https://www.youtube.com/watch?v=VDfsUzu8_Gw>`__
BiologicalMotion2 "The phenomenon known as *biological motion* was first introduced in (`Johansson, 1973 <https://doi.org/10.3758/BF03212378>`__), and consisted of point-light displays arranged and moving in a way that resembled a person moving. The task that we used was originally developed by (`Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__). During the task, the participants were shown a point-light ""walker"", and they had to decide whether the walker's orientation was to the left or to the right, by pressing the index-finger or the middle-finger button of the response box, respectively. The stimuli were divided into 6 different categories: three types of walkers, as well as their reversed versions. The division into categories focuses on three types of information that the participant can extract from the walker: global information, local information and orientation. Global information refers to the general structure of the body and the spatial relationships between its parts. Local information refers to kinematics, speed of the points and mirror-symmetric motion. Please see `Chang et al., 2018 <https://doi.org/10.1016/j.neuroimage.2018.03.013>`__ for more details about the stimuli. The data were acquired in 4 runs. Each run comprised 12 blocks with 8 trials per block. The stimulus duration was 500ms and the inter-stimulus interval 1500ms (total 16s per block). Each block was followed by a fixation block that also lasted 16s. Each run contained 4 of the 6 conditions, repeated 3 times each. This section refers to run type 2, which contained both local naturals and both local modified versions." Psychophysics Toolbox Version 3 (PTB-3), aka Psychtoolbox-3, for GNU Octave Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4) 1024x768
- MathLanguage in order to accurately map language areas for each participant. :raw-html:`<br />` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-08*, for whom we employed MRConfon MKII. Expyriment 0.9.0 (Python 3.6) In-house custom-made sticks featuring one-top button, each one to be used in each hand OptoACTIVE (Optoacoustics) 1920x1080 `See demo <https://youtu.be/FuOiQHS2764>`__ `Repository <https://github.com/individual-brain-charting/public_protocols/tree/master/MathLanguage>`__
+ MathLanguage The **Mathematics and Language** protocol was taken from (`Amalric et al., 2016 <https://doi.org/10.1073/pnas.1603205113>`__). This task aims to comprehensively capture the activation related to several types of mathematical and other facts, presented as sentences. During the task, the participants are presented with a series of sentences, each one in one of two modalities: auditory or visual. Some of the categories include theory-of-mind statements, arithmetic facts and geometry facts. After each sentence, the participant has to indicate whether they believe the presented fact to be true or false, by pressing the button in the left or the right hand, respectively. A second version of each run (runs *B*) was generated by reversing the modality of each trial, so that trials that were visual in the original runs (runs *A*) would be auditory in their corresponding *B* version, and vice versa. Each participant performed four A-type runs, followed by three B-type runs due to time constraints. Each run had an equal number of trials of each category, and the order of the trials was the same for all subjects. :raw-html:`<br />` **Note:** We used the OptoACTIVE (Optoacoustics) audio device for all subjects except for *subject-05* and *subject-08*, who completed the session using MRConfon MKII. Expyriment 0.9.0 (Python 3.6) In-house custom-made sticks featuring one top button, one to be used in each hand OptoACTIVE (Optoacoustics) 1920x1080 `See demo <https://youtu.be/FuOiQHS2764>`__ `Repository <https://github.com/individual-brain-charting/public_protocols/tree/master/MathLanguage>`__
SpatialNavigation This protocol, an adaptation of the one used in (`Diersch et al., 2021 <https://doi.org/10.1523/JNEUROSCI.0528-20.2021>`__), was originally designed to capture the effects of spatial encoding and orientation learning in different age groups. The task requires subjects to navigate and orient themselves in a complex virtual environment that resembles a typical German historic city center, consisting of town houses, shops and restaurants. There are three parts to this task: introduction (outside the scanner), encoding (in the scanner) and retrieval (in the scanner). Before entering the scanner, the participants went through an introduction phase, during which they had the freedom to navigate the virtual environment with the objective of collecting eight red balls scattered throughout various streets of the virtual city. During this part, the participants could familiarize themselves with the different buildings and learn the location of the two target buildings: the Town Hall and the Church. After they had collected all the red balls, a short training session on the main task was performed to ensure correct understanding of the instructions. :raw-html:`<br />` Then, participants went into the scanner. The task began with the encoding phase. During this period, the participant had to passively watch the camera move from one target building to the other, in such a way that every street of the virtual environment was passed through in every possible direction. Participants were instructed to pay close attention to the spatial layout of the virtual environment and the location of the target landmarks. Passive transportation instead of self-controlled travel was chosen to ensure that every participant experienced the virtual environment for the same amount of time. After the encoding phase, the retrieval phase started, which consisted of 8 experimental trials and 4 control trials per run. In each trial, the participant was positioned near an intersection within the virtual environment, which was enveloped in a dense fog, limiting visibility. Subsequently, the camera automatically approached the intersection and centered itself. The participant’s task was to indicate the direction of the target building, which was displayed as a miniature picture at the bottom of the screen. Control and experimental trials were identical, except that during control trials the participant had to point to one of the buildings of the intersection that had been colored blue, instead of to the target building. All of the runs, except the first one, began with the encoding phase, followed by the retrieval phase. In the initial run, a control trial of the retrieval phase preceded the standard design of the encoding phase followed by the retrieval phase. Vizard 6 Five-button ergonomic pad (current designs, package 932 with pyka hhsc-1x5-n4) 1920x1080 `Repository <https://github.com/individual-brain-charting/public_protocols/tree/master/SpatialNavigation>`__
GoodBadUgly "The GoodBadUgly task was adapted from the study by (`Mantini et al., 2012 <https://doi.org/10.1038/nmeth.1868>`__), which was dedicated to investigating the correspondence between monkey and human brains using naturalistic stimuli. The task relies on watching - viewing and listening to - the whole movie ""The Good, the Bad and the Ugly"" by Sergio Leone. For IBC, the French-dubbed version ""Le Bon, la Brute et le Truand"" was presented. The original 177-minute movie was cut into approximately 10-minute segments to match the segment length of the original study, which presented only three 10-minute segments from the middle of the movie. This resulted in a total of 18 segments (the last segment being only 4.5 minutes long). This task was performed during three acquisition sessions with seven segments each, one segment per run. The first three segments were repeated during the final acquisition after the entire movie had been completed." Expyriment 0.9.0 (Python 2.7) 1920x1080
EmoMem "This task is part of the CamCAN (`Cambridge Centre for Ageing and Neuroscience <https://www.cam-can.org/>`__) battery, designed to understand how individuals can best retain cognitive abilities into old age. The adjustments made for IBC concerned the translation of all stimuli and instructions into French, replacing Matlab functions with Octave functions as needed, and eliminating the use of a custom Matlab toolbox, `mrisync <https://github.com/MRC-CBU/mrisync>`__, that was used to interface with the MRI scanner (3T Siemens Prisma) over a National Instruments card. All modifications were made taking care not to alter the psychological state that the original tasks were designed to capture. The **Emotional Memory** task was designed to provide an assessment of implicit and explicit memory, and of how it is affected by emotional valence. At IBC, we only conducted the encoding part of the task (the Study phase, as described in `Shafto et al., 2014 <https://doi.org/10.1186/s12883-014-0204-1>`__), but not the Test phase, which took place outside the scanner in the original study. In each trial, participants were presented with a background picture for 2 seconds, followed by a foreground picture of an object superimposed on it. Participants were instructed to imagine a ""story"" linking the background and foreground pictures, and after an 8-second presentation, the next trial began. The manipulation of emotional valence exclusively affected the background image, which could be negative, neutral, or positive. Participants were asked to indicate the moment they thought of a story or a connection between the object and the background image by pressing a button. In all, 120 trials were presented over 2 runs." Octave 4.4 + Psychtoolbox 3.0 Five-button ergonomic pad (Current Designs, Package 932 with Pyka HHSC-1x5-N4) 800x600