Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Eye movements: Dr. A & Dr. B Part-30

6 minute read

Published:

Dr. A: The recent surge in computational models for eye movement analysis offers profound insights into individual differences. Take, for instance, Rayner’s extensive review on eye movements in reading and information processing, where cognitive processes are illuminated through eye movement data (Rayner, 1998).

Eye movements: Dr. A & Dr. B Part-29

8 minute read

Published:

Dr. A: Recent advancements in computational models have significantly enhanced our understanding of individual differences in eye movements. Sungbok Shin et al. (2022) discuss the use of crowdsourced eye movement data to predict gaze heatmaps on visualizations, emphasizing the diverse apparatus and techniques for collecting eye movements (Shin et al., 2022).

Eye movements: Dr. A & Dr. B Part-28

8 minute read

Published:

Dr. A: The integration of computational cognitive models with eye movement data offers a profound insight into the cognitive mechanisms underlying visual tasks. As illustrated by Balint et al. (2015), computational models can predict human eye movements in visual decision-making tasks, enhancing our understanding of how cognitive strategies and task demands influence eye movements (Balint, Reynolds, Blaha, & Halverson, 2015).

Eye movements: Dr. A & Dr. B Part-27

11 minute read

Published:

Dr. A: The evolution of computational visual attention models has certainly enhanced our understanding of visual perception, especially in medical imaging. A comparative study by Wen et al. (2017) highlighted the necessity of modality-specific tuning of saliency models for improved accuracy in predicting radiologists’ eye movements across different medical imaging modalities (Wen et al., 2017).

Eye movements: Dr. A & Dr. B Part-26

9 minute read

Published:

Dr. A: Let’s discuss the intriguing concept of computational models unveiling individual differences in eye movements, particularly through saccades. The saccadic system, with its exceptional precision, serves as a window into cognitive control and behavioral patterns. Notably, Hutton (2008) outlines how cognitive processes, including working memory and attention, significantly impact saccade parameters, providing insights into cognitive function across various psychopathologies (Hutton, 2008).

Eye movements: Dr. A & Dr. B Part-25

15 minute read

Published:

Dr. A: The visualization techniques for cognitive models, as presented by Balint and colleagues (2015), offer a profound insight into understanding eye movements. They discuss how computational cognitive models predict eye movements in visual decision-making tasks, highlighting the influence of cognitive strategies on observable movements (Balint et al., 2015).

Eye movements: Dr. A & Dr. B Part-24

8 minute read

Published:

Dr. A: As we delve into computational models, especially those like DeepGaze, we recognize their capability in reflecting individual differences in eye movements, a crucial aspect of understanding visual perception.

Eye movements: Dr. A & Dr. B Part-23

16 minute read

Published:

Dr. A: It’s fascinating how computational models can reveal individual differences in eye movements, indicating that these differences are not just noise but reflect underlying cognitive processes (Balint et al., 2015).

Eye movements: Dr. A & Dr. B Part-22

7 minute read

Published:

Dr. A: Have you seen the latest from van Dyck et al., 2022? They’ve shown how human eye-tracking data can directly modify training examples, guiding models’ visual attention during object recognition. Fascinatingly, they managed to guide models away from human-like fixations, showing category-specific effects, especially enhanced by animacy and face presence (van Dyck, Denzler, & Gruber, 2022).

Eye movements: Dr. A & Dr. B Part-21

5 minute read

Published:

Dr. A: Have you seen the latest from van Dyck et al., 2022? They’ve shown how human eye-tracking data can directly modify training examples, guiding models’ visual attention during object recognition. Fascinatingly, they managed to guide models away from human-like fixations, showing category-specific effects, especially enhanced by animacy and face presence (van Dyck, Denzler, & Gruber, 2022).

Expertise Hypothesis: Dr. A & Dr. B Part-20

9 minute read

Published:

Dr. A: The debate between domain generality versus specificity is pivotal in understanding cognitive processes, including neural responses to categories and task-specific training. For instance, studies like (Noppeney et al., 2006) have shown that tool-selective responses in the human brain are mediated by distinct mechanisms that engender category selectivity. This suggests a degree of domain specificity in neural processes.

Expertise Hypothesis: Dr. A & Dr. B Part-19

13 minute read

Published:

Dr. A: The expertise hypothesis often posits that superior performance is predominantly the result of extensive practice within a specific domain. However, recent research challenges this view, suggesting a multifactorial model where genetics and environmental interactions also play critical roles. For example, Ullén, Hambrick, and Mosing (2016) highlight the influence of genetic factors alongside practice in expert performance. This indicates that expertise cannot be fully explained by deliberate practice alone (Ullén, Hambrick, & Mosing, 2016).

Expertise Hypothesis: Dr. A & Dr. B Part-18

5 minute read

Published:

Dr. A: The concept of Brain-Like Functional Specialization suggests a distributed network of regions each with dissociable functional roles, such as those observed in the attention network, which includes cortical and subcortical structures like the frontal and parietal cortices and the superior colliculus (Fiebelkorn & Kastner, 2020). This dynamism offers cognitive flexibility, necessary for adapting to highly dynamic environments.

Expertise Hypothesis: Dr. A & Dr. B Part-17

28 minute read

Published:

Dr. A: The recognition of visual objects is not solely about their physical appearance but significantly involves how these objects are processed in the brain. For instance, a study on the differential effect of stimulus inversion on face and object recognition suggests that inverted faces are processed by mechanisms for the perception of other objects rather than by face perception mechanisms (Haxby et al., 1999). This indicates a specialized mechanism for face perception that operates differently from the recognition of general objects.

Expertise Hypothesis: Dr. A & Dr. B Part-16

19 minute read

Published:

Dr. A: The intriguing aspect of perceptual expertise lies in its manifestation through sensory-specific learning, enhancing our ability to discern subtle distinctions within our environments. This concept has profound implications across various domains, notably in the context of wine expertise, where it’s observed that despite the modest improvements in chemosensory detection thresholds, cognitive and semantic advancements play a pivotal role in expert differentiation (Spence, 2019).

Expertise Hypothesis: Dr. A & Dr. B Part-15

9 minute read

Published:

Dr. A: The expertise hypothesis suggests domain-specificity plays a critical role in cognitive functions, but I argue that domain-general mechanisms offer a broader, more adaptable framework for understanding intelligence. Take, for example, the evolution of domain-general mechanisms in intelligence and learning, which have been shown to be powerful tools for solving novel problems by manipulating information from various modules (Chiappe & Macdonald, 2005).

Expertise Hypothesis: Dr. A & Dr. B Part-14

15 minute read

Published:

Dr. A: Recent research on expertise, specifically the expertise hypothesis, underscores a multifaceted view, contrasting sharply with older theories that primarily attributed expert performance to extensive practice. Ullén, Hambrick, and Mosing (2016) challenge the deliberate practice theory, proposing a multifactorial gene-environment interaction model that accounts for cognitive abilities and genetic factors alongside practice (Ullén, Hambrick, & Mosing, 2016).

Expertise Hypothesis: Dr. A & Dr. B Part-12

13 minute read

Published:

Dr. A: The Fusiform Face Area, or FFA, is traditionally considered central to face perception, distinguishing faces from non-face objects. Yet, studies such as McGugin et al. (2016) challenge this by suggesting the FFA’s role extends to domain-general object perception, showing a relationship between cortical thickness in FFA and recognition performance for both faces and objects (McGugin et al., 2016).

Expertise Hypothesis: Dr. A & Dr. B Part-11

10 minute read

Published:

Dr. A: Considering the fusiform face area’s (FFA) role in processing invariant facial aspects, recent models suggest a dichotomy with the superior temporal sulcus (STS) handling dynamic aspects. However, the ventral-dorsal stream division might be oversimplified. Accumulating neuroimaging evidence proposes an update emphasizing dissociation between form and motion, urging exploration into dynamic faces (Bernstein & Yovel, 2015).

Face Pareidolia: Dr. A & Dr. B Part-10

7 minute read

Published:

Dr. A: The neural underpinnings of face processing reveal a division between the fusiform face area (FFA), focusing on invariant aspects such as identity, and the posterior superior temporal sulcus (pSTS), which processes changeable aspects like expression. Bernstein and Yovel’s (2015) review suggests updating models to emphasize form and motion’s dissociation, a ventral stream through the FFA, and a dorsal stream through the STS for dynamic faces (Bernstein & Yovel, 2015).

Face Pareidolia: Dr. A & Dr. B Part-9

9 minute read

Published:

Dr. A: Considering face pareidolia, we start with evolutionary perspectives. Zhou and Meng (2020) elucidate the phenomenon as a basic cognitive process, suggesting it could have evolutionary roots in identifying threats quickly. They touch upon individual differences in experiencing pareidolia, highlighting its complexity and potential adaptive advantages. (Zhou & Meng, 2020)

Face Pareidolia: Dr. A & Dr. B Part-8

2 minute read

Published:

Dr. A: Let’s delve into the intricacies of face pareidolia. It’s fascinating how our cognitive systems are wired to recognize faces even where none exist. The prefrontal cortex and the fusiform face area play pivotal roles, as highlighted by Akdeniz et al. (2018), showing activation in these areas during pareidolia.

Face Pareidolia: Dr. A & Dr. B Part-7

5 minute read

Published:

Dr. A: Fascinatingly, the phenomenon of face pareidolia, where we see faces in inanimate objects, taps into the brain’s core mechanisms for face and object recognition. Wardle et al. (2018) found that magnetoencephalography (MEG) reveals the human brain’s dynamic response to illusory faces is not fully captured by existing computational models of visual saliency but is somewhat predicted by categorizing stimuli into faces versus objects (Wardle et al., 2018).

Face Pareidolia: Dr. A & Dr. B Part-6

13 minute read

Published:

Dr. A: Have you considered how face pareidolia might significantly hinge on individual differences, as Zhou and Meng (2020) suggest? Their review illuminates vast differences in face pareidolia experiences, influenced by sex, developmental stages, and neurodevelopmental factors (Zhou & Meng, 2020).

Face Pareidolia: Dr. A & Dr. B Part-5

11 minute read

Published:

Dr. A: Regarding face pareidolia, recent studies have significantly advanced our understanding. For instance, Wang and Yang (2018) discussed the neural mechanisms involved, highlighting the importance of both top-down and bottom-up factors in the occurrence of face pareidolia. They noted the crucial role of the fusiform face area (FFA) in integrating information from frontal and occipital visual regions (Wang & Yang, 2018).

Face Pareidolia: Dr. A & Dr. B Part-4

9 minute read

Published:

Dr. A: The phenomenon of face pareidolia, where we perceive facial features on inanimate objects, is fascinating from both a psychological and neural perspective. Studies have shown that both top-down and bottom-up factors modulate its occurrence, involving the fusiform face area (FFA) when experiencing pareidolia (Wang & Yang, 2018).

Face Pareidolia: Dr. A & Dr. B Part-3

7 minute read

Published:

Dr. A: The phenomenon of face pareidolia, where we discern faces in inanimate objects, taps into our brain’s face-specific processing capabilities. Studies like Liu et al.’s reveal the right fusiform face area’s (rFFA) unique activation during this illusion, suggesting a top-down mechanism heavily involved in human face processing (Liu et al., 2014).

Face Pareidolia: Dr. A & Dr. B Part-2

17 minute read

Published:

Dr. A: In exploring the mechanisms of face perception, the distributed human neural system plays a crucial role. The distinction between invariant and changeable aspects of faces is fundamental, where invariant aspects underpin individual recognition, and changeable aspects facilitate social communication. This system is both hierarchical and distributed, involving the core and extended systems, with the fusiform gyrus and superior temporal sulcus being particularly instrumental in processing these aspects, respectively (Haxby, Hoffman, & Gobbini, 2000).

Face Pareidolia: Dr. A & Dr. B Part-1

14 minute read

Published:

Dr. A: Face pareidolia, the tendency to perceive faces where none actually exist, has long intrigued us, especially considering its wide variance among individuals. Liu-Fang Zhou and Ming Meng’s review on individual differences in face pareidolia highlights significant variance influenced by factors such as sex, developmental stages, personality traits, and neurodevelopmental factors (Zhou & Meng, 2020).

portfolio

Poster 1

Published:

The poster presents a study where task-optimized convolutional neural networks (CNNs) challenge the expertise hypothesis, suggesting that systems broadly optimized for object recognition provide a better foundation for learning fine-grained tasks like car discrimination than systems optimized for face recognition, thus questioning the computational viability of the expertise hypothesis.

Poster 2

Published:

The study explores face pareidolia, where humans see faces in random stimuli, using the DeepGaze model to compare its detection abilities with human gaze patterns, revealing DeepGaze’s potential in recognizing face-like patterns but also its limitations in fully capturing the nuances of human gaze behavior in face pareidolia.

publications

Digital image processing with deep learning for automated cutting tool wear detection

Published in Procedia Manufacturing, 2020

This study explores the application of deep learning in digital image processing for detecting wear on cutting tools, with a focus on detection; wear measurement is addressed in the follow-up paper.

Recommended citation: Bergs, T., Holst, C., Gupta, P., & Augspurger, T. (2020). "Digital image processing with deep learning for automated cutting tool wear detection." Procedia Manufacturing, 48, 947–958.

Deep learning and rule-based image processing pipeline for automated metal cutting tool wear detection and measurement

Published in IFAC-PapersOnLine, 2022

This paper presents a digital and big data analytics approach to quantify metal cutting tool wear, employing a pipeline of deep learning for processing images and a rule-based method for measuring wear along the cutting edge. The automated system enables inline tool wear detection and measurement within CNC machining applications.

Recommended citation: Holst, C., Yavuz, T. B., Gupta, P., Ganser, P., & Bergs, T. (2022). "Deep learning and rule-based image processing pipeline for automated metal cutting tool wear detection and measurement." IFAC-PapersOnLine, 55(2), 534–539.

CNNs reveal the computational implausibility of the expertise hypothesis

Published in iScience, 2023

This study challenges the expertise hypothesis, which holds that face-specific brain mechanisms are in fact domain-general expertise mechanisms, by showing that neural networks optimized for generic object categorization outperform those optimized for face recognition in expert object discrimination. The results suggest it is computationally implausible that the mechanisms underlying face recognition would also best support expertise in other visual domains.

Recommended citation: Kanwisher, N., Gupta, P., & Dobs, K. (2023). "CNNs reveal the computational implausibility of the expertise hypothesis." iScience, 26(2).

Human-like face pareidolia emerges in deep neural networks optimized for face and object recognition [Under Review]

Published in PLOS Computational Biology [Under Review], 2023

Using deep convolutional neural networks (CNNs) and magnetoencephalography (MEG), this study investigates the neural basis of face pareidolia, showing that initial misidentification of faces in inanimate objects is a byproduct of the brain’s optimization for face and object recognition. The research reveals that while early stages of processing mistake pareidolia for real faces, this error is corrected in later stages through specialized face recognition optimization.

Recommended citation: Gupta, P., & Dobs, K. (2023). "Human-like face pareidolia emerges in deep neural networks optimized for face and object recognition [Under Review]."

Investigating face pareidolia using DeepGaze: Bridging human and artificial perception [In Preparation]

Published in [In Preparation], 2024

This study employs DeepGaze models to investigate face pareidolia, revealing their superior ability to detect face-like patterns over standard models and highlighting challenges in explaining gaze prediction complexity. Findings underscore the importance of dataset diversity and reveal nuances in modeling individual versus collective gaze patterns in understanding human visual perception.

Recommended citation: Gupta, P., & Dobs, K. (2024). "Investigating face pareidolia using DeepGaze: Bridging human and artificial perception [In Preparation]."

talks

CNNs reveal the computational implausibility of the expertise hypothesis

Published:

In this workshop poster presentation, I discussed the use of convolutional neural networks (CNNs) to test the computational plausibility of the expertise hypothesis in visual recognition processes. The session included an in-depth analysis of how deep learning models can inform our understanding of visual cognition, emphasizing the parallels and distinctions between artificial and human perceptual capabilities.

CNNs reveal the computational implausibility of the expertise hypothesis

Published:

This poster presented research using convolutional neural networks (CNNs) to challenge the expertise hypothesis within the field of visual perception, offering critical insights into the computational limits and capabilities of the fusiform face area (FFA).

teaching

Mentoring experience 1

Master Thesis, Justus Liebig University, FB-06 Department, 2022

Samuel Sander’s Master Thesis explores the inversion effects in humans and deep neural networks, examining how orientation affects object recognition in both. By comparing human performance with that of deep neural networks across various visual tasks, this work seeks to understand if neural networks can predict inversion effects in humans. Through methodological approaches involving the Ecoset dataset and different network architectures, the thesis finds significant inversion effects in both humans and neural networks, suggesting similarities in classification behaviors despite differences in error distributions under increased image distortion.

Mentoring experience 2

Bachelor Thesis, Justus Liebig University, FB-06 Department, 2023

Christine Huschens’ Bachelor Thesis focuses on a study of the inversion effects in the perception of faces, pareidolias, and objects in both biological and artificial networks, employing eye-tracking and DeepGaze saliency maps for comparison. The thesis covers the phenomenon of pareidolia—the tendency to perceive faces in everyday objects—and investigates how this phenomenon and face recognition are affected when images are inverted. It explores the neural processes involved in detecting faces and pareidolias, comparing human and artificial neural network responses to these stimuli. The study aims to understand the influence of image inversion on fixation patterns in humans and artificial networks, and how context (art vs. real objects) affects these patterns.