CoCoReco - ECCV 2024

This is the code base for our ECCV 2024 paper "Connectivity-Inspired Network for Context-Aware Recognition" (CoCoReco), presented at the "Human-inspired Computer Vision" International Workshop, September 29, 2024, Milan.

Abstract

The aim of this paper is threefold. We inform the AI practitioner about the human visual system with an extensive literature review; we propose a novel biologically motivated neural network for image classification; and, finally, we present a new plug-and-play module to model context awareness. We focus on the effect of incorporating circuit motifs found in biological brains to address visual recognition. Our convolutional architecture is inspired by the connectivity of human cortical and subcortical streams, and we implement bottom-up and top-down modulations that mimic the extensive afferent and efferent connections between visual and cognitive areas. Our Contextual Attention Block is simple and effective and can be integrated with any feed-forward neural network. It infers weights that multiply the feature maps according to their causal influence on the scene, modeling the co-occurrence of different objects in the image. We place our module at different bottlenecks to infuse a hierarchical context awareness into the model. We validated our Connectivity-Inspired Context-Aware Recognition (CoCoReco) network through image classification experiments on benchmark data and found consistent improvements in performance and in the robustness of the explanations produced via class activation maps.

Code and Dataset

Get started with the code!

You can easily use our SLURM 'sbatch' submission file, slurm_submit.x. That file sets some variables and launches the Python/PyTorch training script, train.py.

In network.py, you can find our novel Connectivity-inspired Context-aware Recognition (CoCoReco) model, depicted in the architecture figure of the paper.
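For a quick orientation, here is a minimal usage sketch. The class name CoCoReco and its constructor arguments below are assumptions made for illustration only; check network.py for the exact interface exported by the repository.

import torch
from network import CoCoReco  # hypothetical import; the actual class name may differ

model = CoCoReco(num_classes=10)      # constructor arguments are assumptions (Imagenette has 10 classes)
model.eval()

x = torch.randn(1, 3, 224, 224)       # dummy RGB image batch
with torch.no_grad():
    logits = model(x)                 # class scores, e.g. shape (1, 10)
print(logits.shape)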

In addition, network.py includes our proposed Contextual Attention Block (CAB), which infers weights that multiply the feature maps according to their causal influence on the scene, modeling the co-occurrence of different objects in the image.
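To make the mechanism concrete, below is a minimal, self-contained PyTorch sketch of a block that infers one weight per feature map from the global context and reweights the maps accordingly. It is not the implementation from network.py: this simplification reduces to a squeeze-and-excitation-style gate and does not model the causal influence / co-occurrence estimation used by the actual CAB.

import torch
import torch.nn as nn

class ContextualAttentionSketch(nn.Module):
    """Toy context-aware gating: summarize the scene, infer one weight per
    feature map, and multiply each map by its weight (illustrative sketch only)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # global context descriptor
        self.mlp = nn.Sequential(                      # infer per-map weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        context = self.pool(x).view(b, c)              # (B, C) scene summary
        weights = self.mlp(context).view(b, c, 1, 1)   # one weight per feature map
        return x * weights                             # contextually reweighted maps

# Example: plug the block after a convolutional stage ("bottleneck")
feats = torch.randn(2, 64, 28, 28)
cab = ContextualAttentionSketch(channels=64)
out = cab(feats)   # same shape as feats

In the paper, such a module is placed at different bottlenecks of the backbone so that the context awareness acts hierarchically.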

If you encounter line-ending issues (e.g., a file created on Windows being used on a Linux system), you can easily convert the file with the dos2unix utility or any online Dos2Unix converter.

Dataset

In this work, we used Imagenette v2, a smaller version of the popular ImageNet dataset, composed of the images from its 10 most easily classified classes. You can find additional information on this dataset on the fastai Imagenette page. To download the "320 px" version, as we did, fetch the corresponding .tgz archive. If you are a Linux user, you can easily get it from the command line with

wget https://s3.amazonaws.com/fast-ai-imageclas/imagenette2-320.tgz
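Alternatively, here is a minimal Python sketch for downloading the archive and building train/validation datasets with torchvision. The paths and transforms are illustrative only; the actual data pipeline used in our experiments is defined in train.py.

import tarfile
import urllib.request
from pathlib import Path
from torchvision import datasets, transforms

URL = "https://s3.amazonaws.com/fast-ai-imageclas/imagenette2-320.tgz"
root = Path("data")
archive = root / "imagenette2-320.tgz"

root.mkdir(exist_ok=True)
if not archive.exists():
    urllib.request.urlretrieve(URL, archive)   # download the .tgz
    with tarfile.open(archive) as tar:
        tar.extractall(root)                   # creates data/imagenette2-320/{train,val}

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder(root / "imagenette2-320" / "train", transform=tfm)
val_ds = datasets.ImageFolder(root / "imagenette2-320" / "val", transform=tfm)
print(len(train_ds), len(val_ds), train_ds.classes)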

Cite

If you have found our code and paper useful for your research, please cite our work and star this repo.

(Work in progress: I will update the citation upon publication of the final, camera-ready version of our paper in the ECCV 2024 proceedings.)

@article{carloni2024connectivity,
  title={Connectivity-Inspired Network for Context-Aware Recognition},
  author={Carloni, Gianluca and Colantonio, Sara},
  journal={arXiv preprint arXiv:2409.04360},
  year={2024}
}