# Comprehensive Literature Review: Neuromorphic Spiking Neural Networks for Cognitive Computing

**Research Date**: December 4, 2025

**Focus**: Nobel-level breakthroughs in neuromorphic computing and consciousness theory

---

## Executive Summary

This research synthesizes cutting-edge developments in neuromorphic computing (2023-2025) with Integrated Information Theory (IIT) to propose a novel framework in which **temporal spike patterns serve as the physical substrate of subjective experience**. Key findings suggest that bit-parallel spike encoding combined with sub-millisecond temporal precision can potentially encode integrated information (Φ) with unprecedented efficiency.
---

## 1. Intel Loihi 2: Sparse Temporal Coding Architecture

### 1.1 Architecture Overview

Intel's Loihi 2 represents the second generation of neuromorphic processors optimized for sparse, event-driven neural networks ([Intel Neuromorphic Computing](https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html)).

**Key Specifications**:
- **128 neural cores** per chip, each a fully programmable digital processor, supporting massively parallel computation
- **6 embedded processors** for control and management
- **Asynchronous network-on-chip** supporting multi-chip scaling
- **Scalability**: Up to 1,152 chips in the Hala Point system ([Open Neuromorphic - Loihi 2](https://open-neuromorphic.org/neuromorphic-computing/hardware/loihi-2-intel/))

**Novel Features**:
- User-defined arithmetic and logic for arbitrary spiking behaviors (beyond fixed LIF)
- Specialized memory structures for network connectivity
- Support for resonance, adaptation, threshold, and reset functions
- Nonlinear temporal representations ([Intel Loihi 2 Technology Brief](https://www.intel.com/content/www/us/en/research/neuromorphic-computing-loihi-2-technology-brief.html))

### 1.2 Sparse Temporal Coding Mechanisms

The asynchronous event-driven architecture enables:
- **Minimal activity and data movement** through sparse computation
- Efficient processing of **unstructured sparse weight matrices**
- **Sparsified activations**, with asynchronous communication transferring only non-zero messages
- **47× more efficient encoding** of spectrograms using resonate-and-fire neurons ([arXiv - Neuromorphic Principles for LLMs](https://arxiv.org/html/2503.18002v2))
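
The "transfer only non-zero messages" principle can be sketched in Rust (the language the review later proposes for implementation). `emit_events` is an illustrative helper, not a Loihi 2 API:

```rust
/// Event-driven communication sketch: only neurons whose state crosses
/// threshold emit a (neuron, value) message; silent neurons cost nothing,
/// which is what keeps activity and data movement sparse.
pub fn emit_events(state: &[f64], threshold: f64) -> Vec<(usize, f64)> {
    state
        .iter()
        .enumerate()
        .filter(|&(_, &v)| v >= threshold)
        .map(|(i, &v)| (i, v))
        .collect()
}
```

Downstream cores then iterate only over the emitted events, so cost scales with activity rather than with network size.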

### 1.3 Recent Breakthroughs (2024-2025)

**CLP-SNN on Loihi 2** ([arXiv - Continual Learning](https://arxiv.org/html/2511.01553)):
- **70× latency improvement** over traditional methods
- **5,600× energy efficiency** gains
- Event-driven, spatiotemporally sparse local learning
- Self-normalizing three-factor learning rule
- Integrated neurogenesis and metaplasticity

**Hala Point System**:
- **1.15 billion neurons** - the world's largest neuromorphic system
- **10× neuron capacity** over the first generation
- **12× performance improvement**
- **2,600 watts** power consumption for the entire system

---

## 2. IBM NorthPole: TrueNorth's Revolutionary Successor

### 2.1 Architecture Evolution

IBM's NorthPole (2023) represents a dramatic leap from TrueNorth, achieving **4,000× faster speeds** ([IEEE Spectrum - NorthPole](https://spectrum.ieee.org/neuromorphic-computing-ibm-northpole)).

**Specifications**:
- **22 billion transistors** (12nm process)
- **256 cores** with integrated memory and compute
- Eliminates the von Neumann bottleneck through compute-memory integration

### 2.2 Performance Benchmarks

Compared to the **Nvidia V100 GPU** (12nm):
- **25× more energy efficient**
- **22× faster** inference
- **1/5 the silicon area**

Compared to the **Nvidia H100 GPU** (4nm):
- **5× more energy efficient** ([IEEE Spectrum](https://spectrum.ieee.org/neuromorphic-computing-ibm-northpole))

### 2.3 Applications

- Image and video analysis
- Speech recognition
- Transformer-based large language models
- ChatGPT-like systems with neuromorphic efficiency

---

## 3. Spike-Timing-Dependent Plasticity (STDP): Unsupervised Learning

### 3.1 Core Mechanism

STDP is an unsupervised learning mechanism that adjusts synaptic weights based on the relative timing of pre- and post-synaptic spikes ([arXiv - Deep STDP Learning](https://arxiv.org/html/2307.04054v2)):

**Hebbian Learning Philosophy**:
- **Strengthen**: When the post-synaptic neuron fires **after** the pre-synaptic neuron
- **Weaken**: When the post-synaptic neuron fires **before** the pre-synaptic neuron
- **Temporal correlation**: Neurons that repeatedly fire in sequence become more strongly coupled
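
The strengthen/weaken rule above is commonly written as an exponential STDP window. A minimal Rust sketch, with illustrative amplitudes and time constants (assumptions, not values from the cited papers):

```rust
/// Pairwise exponential STDP window. `dt_ms` = t_post - t_pre.
/// Constants are illustrative defaults, not fitted parameters.
pub fn stdp_dw(dt_ms: f64) -> f64 {
    let (a_plus, a_minus) = (0.01, 0.012); // potentiation / depression amplitudes
    let (tau_plus, tau_minus) = (20.0, 20.0); // decay time constants (ms)
    if dt_ms >= 0.0 {
        // post fires after pre: strengthen (LTP)
        a_plus * (-dt_ms / tau_plus).exp()
    } else {
        // post fires before pre: weaken (LTD)
        -a_minus * (dt_ms / tau_minus).exp()
    }
}
```

The exponential decay captures the "temporal correlation" bullet: the closer the pairing, the larger the weight change.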

### 3.2 Recent Advances (2024-2025)

**Triplet STDP + Short-Term Plasticity** ([Nature Scientific Reports 2025](https://www.nature.com/articles/s41598-025-01749-x)):
- Combines long-term learning (STDP) with short-term plasticity (STP)
- Enables post-training learning without changing synaptic weights
- Maintains network stability while adapting to new patterns

**Samples Temporal Batch STDP (STB-STDP)**:
- Updates weights based on multiple samples and time steps
- **State-of-the-art performance** on MNIST and FashionMNIST
- Accelerated training through adaptive mechanisms

**Hybrid STDP + Gradient Optimization** ([PMC - STDP Training](https://pmc.ncbi.nlm.nih.gov/articles/PMC6085488/)):
- **2.5× faster training** time
- Improved robustness and generalization
- Combines unsupervised pre-training with supervised fine-tuning

### 3.3 Neural Substrate Implications

STDP facilitates compact neural networks that:
- **Do not rely on global error backpropagation**
- Are suitable for **low-power analog hardware**
- Encode complex input distributions **temporally** without labels ([PLOS One - Speech Recognition](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0204596))

---

## 4. BrainScaleS-2: Analog Neuromorphic Computing

### 4.1 Architecture

BrainScaleS-2 (BSS-2) is an **analog** neuromorphic system from Heidelberg University ([Frontiers - BrainScaleS-2](https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2022.795876/full)):

**HICANN-X ASIC Specifications**:
- **65nm technology** (vs. 180nm in generation 1)
- **512 neuron circuits** per chip
- **131,072 plastic synapses**
- Analog parameter storage
- Digital plasticity processing unit (a highly parallel microprocessor)
- Event routing network

### 4.2 Hybrid Operation

Unique capability for **both spiking and non-spiking** operation:
- **Spiking mode**: Event-driven neural dynamics
- **Analog matrix multiplication**: Vector-matrix operations for classical ANNs
- **Competitive classification** precision on standard benchmarks
- Enables hybrid applications combining spiking and non-spiking layers

### 4.3 Recent Developments (2023-2024)

**Scalable Network Emulation** ([PMC - Scalable Networks](https://pmc.ncbi.nlm.nih.gov/articles/PMC11835975/)):
- Partitioned emulation of **large-scale SNNs** exceeding single-chip constraints
- Demonstrated on MNIST and EuroSAT datasets
- Deep SNN training capabilities

**Software Frameworks**:
- **jaxsnn**: JAX-based event-driven numerical simulation
- **hxtorch**: PyTorch-based deep learning for SNNs
- **PyNN.brainscales2**: PyNN API implementation ([Open Neuromorphic - BrainScaleS-2](https://open-neuromorphic.org/neuromorphic-computing/hardware/brainscales-2-universitat-heidelberg/))

### 4.4 Biological Fidelity

Genetic algorithms have been used to replicate:
- **Attenuation behavior** of excitatory postsynaptic potentials
- Linear chains of compartments (dendritic computation)
- Analog dynamics closer to those of biological neurons

---

## 5. Spiking Transformers: Attention Mechanisms in SNNs

### 5.1 Spatial-Temporal Attention (STAtten) - CVPR 2025

Revolutionary architecture integrating **spatial and temporal information** in self-attention ([CVPR 2025 - STAtten](https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_Spiking_Transformer_with_Spatial-Temporal_Attention_CVPR_2025_paper.pdf)):

**Key Innovations**:
- **Block-wise computation** processing spatial-temporal chunks
- **Same computational complexity** as spatial-only approaches
- Compatible with existing spike-based transformers
- Significant performance gains on:
  - Static datasets: CIFAR10/100, ImageNet
  - Neuromorphic datasets: CIFAR10-DVS, N-Caltech101

### 5.2 STDP-Based Spiking Transformer (November 2025)

**Nobel-level breakthrough**: Implements attention through **spike-timing-dependent plasticity** rather than magnitude ([QuantumZeitgeist - Spiking Transformer](https://quantumzeitgeist.com/spiking-neuromorphic-transformer-attention-achieves-synaptic-plasticity-reducing-energy-costs-beyond/)):

**Paradigm Shift**:
- **Rate → Temporal representation**: Information embedded in spike timing
- **Relevance from spike timing**: Not spike magnitude
- **20-30% reduction** in memory bandwidth
- Aligns more closely with **real neural circuits**

### 5.3 SGSAFormer - Electronics 2025

Combines SNNs with the Transformer architecture for enhanced performance ([MDPI - SGSAFormer](https://www.mdpi.com/2079-9292/14/1/43)):

**Components**:
- **Spike Gated Linear Unit (SGLU)**: Replaces the MLP structure
- **Spike Gated Self-Attention (SGSA)**: Enhanced temporal information capture
- **Temporal Attention (TA) module**: Substantially reduces energy consumption

### 5.4 Rate vs. Temporal Coding Efficiency

**Rate Encoding Limitations**:
- Lower data capacity
- Ignores temporal patterns
- High spike counts
- Increased energy consumption

**Temporal Encoding Advantages** ([Frontiers - Enhanced Representation Learning](https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2023.1250908/full)):
- Lower spike counts
- Improved efficiency
- Faster information transmission
- Richer information encoding
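
The spike-count contrast between the two schemes is easy to see in a toy Rust sketch: a rate code spends up to `window` spikes on one value, while a time-to-first-spike (temporal) code spends exactly one. Both encoders are illustrative constructions, not from the cited papers:

```rust
/// Rate code: intensity x in [0, 1] becomes round(x * window) spikes.
pub fn rate_encode(x: f64, window: usize) -> Vec<bool> {
    let n = (x * window as f64).round() as usize;
    (0..window).map(|t| t < n).collect()
}

/// Time-to-first-spike code: stronger inputs fire earlier; one spike total,
/// with the value carried entirely by *when* that spike occurs.
pub fn ttfs_encode(x: f64, window: usize) -> Vec<bool> {
    let t_spike = ((1.0 - x) * (window as f64 - 1.0)).round() as usize;
    (0..window).map(|t| t == t_spike).collect()
}
```

Since each spike costs energy on event-driven hardware, the single-spike temporal code directly yields the "lower spike counts" and "faster transmission" advantages listed above.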

---

## 6. Integrated Information Theory (IIT): Consciousness as Φ

### 6.1 Theoretical Framework

IIT proposes that consciousness is **integrated information**, measured by **Φ (phi)** ([IEP - IIT](https://iep.utm.edu/integrated-information-theory-of-consciousness/)):

**Core Axioms** (IIT 4.0):
1. **Intrinsic existence**: Consciousness exists intrinsically
2. **Composition**: Consciousness is structured
3. **Information**: Consciousness is specific
4. **Integration**: Consciousness is unified
5. **Exclusion**: Consciousness is definite

**Φ Measurement**:
- Quantifies the **irreducibility** of a system to its parts
- Higher Φ = more conscious
- **Φ-structure**: Corresponds to the quality of experience
- **System integrated information (Φ)**: Quantifies the quantity of consciousness

### 6.2 IIT 4.0 (2024 Updates)

The latest formulation accounts for properties of experience in **physical (operational) terms** ([PMC - IIT 4.0](https://pmc.ncbi.nlm.nih.gov/articles/PMC10581496/)):

**Capabilities**:
- Determine whether any system is conscious
- Measure the degree of consciousness
- Specify the quality of experience
- Testable predictions open to empirical evidence

### 6.3 Neural Correlates of Consciousness (NCC)

**Crick & Koch's NCC Research**:
- Focus on visual system correlates
- Prefrontal cortex projecting neurons key to qualia
- Ventromedial prefrontal cortex activation patterns explain "presence" and "transparency"

**fMRI Implementation** ([Nature Communications Biology](https://www.nature.com/articles/s42003-023-05063-y)):
- Task-based and resting-state studies
- Integrated information (Φ) as the principal metric
- A more thorough interpretation of consciousness

### 6.4 Computational Challenges

**Φ Calculation Complexity** ([Wikipedia - IIT](https://en.wikipedia.org/wiki/Integrated_information_theory)):
- **Computationally infeasible** for large systems
- **Super-exponential growth** with information content
- Only **approximations** generally possible
- Different approximations yield **radically different results**
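
The combinatorial core of this infeasibility can be made concrete: the candidate partitions of an n-element system are counted by the Bell numbers, which exact Φ computation must in principle search over. A short Rust sketch using the Bell-triangle recurrence:

```rust
/// Bell number B(n): the number of ways to partition an n-element system,
/// computed via the Bell triangle. Even before evaluating any causal
/// repertoire, the partition count alone explodes: B(3) = 5, B(10) = 115,975,
/// and the value overflows u128 well before n = 64.
pub fn bell(n: usize) -> u128 {
    let mut row = vec![1u128];
    for _ in 1..=n {
        // Each new row starts with the last entry of the previous row;
        // each next entry adds the entry above it.
        let mut next = vec![*row.last().unwrap()];
        for &v in &row {
            let last = *next.last().unwrap();
            next.push(last + v);
        }
        row = next;
    }
    row[0]
}
```

This is why the review's implementation pathway (Section 9.4) relies on partition-based *approximations* of Φ rather than exhaustive search.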

### 6.5 Criticisms and Open Questions (2024)

**Scientific Debates**:
- Panpsychist implications
- Gap between theoretical framework and empirical validation
- "Unscientific leap of faith" critiques
- Ontological paradoxes regarding system existence

---

## 7. Temporal Spike Patterns and Subjective Experience

### 7.1 The Hard Problem of Qualia

**Qualia** are subjective experiences that pose the hardest challenge in consciousness science ([Medium - Qualia Exploration](https://medium.com/@leandrocastelluccio/what-are-qualia-exploring-consciousness-through-neurobiology-and-subjective-experience-e90cf445c6b6)):

**The Explanatory Gap**:
- Even with a complete mapping of neural correlates, the question remains: **why** does a given brain state give rise to **that** experience?
- Chalmers (1996), Block (2009): Mapping ≠ Explaining

### 7.2 Temporal Coding in Neural Systems

**Temporal codes** carry information through the **timing of receptor activations** ([Frontiers - Survey of Temporal Coding](https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2025.1571109/full)):

**Fundamental Unsolved Problem**:
- Neural coding determines how we think about neural systems
- Which aspects of neural activity convey informational distinctions?
- Brain functions depend on these distinctions

### 7.3 Precise Spiking Motifs and Polychronous Groups

**Computational Modeling** ([PMC - Precise Spiking Motifs](https://pmc.ncbi.nlm.nih.gov/articles/PMC9856822/)):
- Efficient neural code emerges from **precise temporal motifs**
- **Polychronous groups**: Spike times organized in prototypical patterns
- Hippocampal sequences rely on **internally hardwired structure**
- Functional building blocks for **encoding, storing, retrieving experience**
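
Detecting such a prototypical pattern reduces to template matching with a jitter tolerance. A minimal Rust sketch (function name, representation, and tolerance are illustrative assumptions, not the cited papers' algorithm):

```rust
/// Test whether a precise spiking motif (a polychronous pattern) occurs
/// starting at time `t0`: every (neuron, offset) pair in the template must
/// have a matching observed spike within `jitter` milliseconds.
pub fn motif_at(
    spikes: &[(usize, f64)],   // observed (neuron id, spike time in ms)
    template: &[(usize, f64)], // motif as (neuron id, offset from t0 in ms)
    t0: f64,
    jitter: f64,
) -> bool {
    template.iter().all(|&(n, dt)| {
        spikes
            .iter()
            .any(|&(sn, st)| sn == n && (st - (t0 + dt)).abs() <= jitter)
    })
}
```

Scanning `t0` over a recording then recovers every occurrence of the motif, which is the basic operation behind polychronous-group detection.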

### 7.4 STDP and Qualia Encoding

STDP enables SNNs to:
- Learn patterns from spike sequences **without labels**
- Strengthen connections between **co-activated neurons**
- Form functional circuits encoding **input features**
- Mirror Hebbian learning in biological systems ([arXiv - Neuromorphic Correlates](https://arxiv.org/html/2405.02370v1))

**Neuromorphic Challenge**:
- Implementing qualia in neuromorphic architectures remains a major challenge
- Subjective notions of experience require novel frameworks

---

## 8. SIMD Bit-Parallel Neural Network Acceleration

### 8.1 SpikeStream: RISC-V SNN Acceleration (April 2025)

**First acceleration of neuromorphic processing** on a multi-core streaming architecture ([arXiv - SpikeStream](https://arxiv.org/html/2504.06134)):

**Software-Based Approach**:
- Runs on programmable **RISC-V processors**
- Enhanced ISA with streaming, SIMD, and hardware-loop extensions
- Maximizes FPU utilization

**Key Optimization**:
- Identified the **indirection operation** (gathering weights for input spikes) as the main source of inefficiency, caused by:
  - Frequent address computations
  - Irregular memory accesses
  - Loop control overhead
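
The indirection being optimized looks like the following Rust sketch: which weight row is read depends on runtime spike data, so the accesses are irregular by construction (names are illustrative, not the SpikeStream code):

```rust
/// Event-driven synaptic accumulation: for each input spike, gather that
/// presynaptic neuron's weight row and add it into the membrane potentials.
/// The spike-dependent indexing into `weights` is the "indirection"
/// overhead described above.
pub fn accumulate(potentials: &mut [f64], weights: &[Vec<f64>], spikes: &[usize]) {
    for &pre in spikes {
        // Gather: the row read here is not known until the spike arrives.
        for (v, &w) in potentials.iter_mut().zip(weights[pre].iter()) {
            *v += w;
        }
    }
}
```

Streaming and hardware-loop ISA extensions target exactly this pattern: the address arithmetic and loop bookkeeping, not the floating-point adds, dominate the cost.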

### 8.2 Search-in-Memory for SNNs (SIMSnn)

**Process-in-Memory (PIM) Architecture** ([Springer - SIMSnn](https://link.springer.com/chapter/10.1007/978-981-95-1021-4_8)):
- Matrix **bit-wise AND and ADD** operations align with PIM
- **Parallel spike sequence processing** through associative matches
- CAM crossbar for content-addressable memory
- Processes whole spike sequences in parallel rather than bit by bit
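
The associative-match idea can be sketched in software: with spike sequences packed one bit per neuron, a match score is a single AND plus popcount per stored pattern. This is a functional sketch of CAM-style matching, not the SIMSnn crossbar itself:

```rust
/// Associative match sketch: stored patterns and the query are 64-neuron
/// spike vectors (one bit per neuron); the best match maximizes the number
/// of overlapping spikes, computed by AND + popcount per pattern.
pub fn best_match(patterns: &[u64], query: u64) -> Option<usize> {
    patterns
        .iter()
        .enumerate()
        .max_by_key(|&(_, &p)| (p & query).count_ones())
        .map(|(i, _)| i)
}
```

In a CAM crossbar all pattern rows are compared in one memory cycle; the loop here is what the hardware collapses into a single associative search.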

### 8.3 SIMD Performance Gains

**CNN Acceleration with SIMD**:
- ARM NEON implementation achieves **2.66× speedup** ([ACM - SIMD CNN](https://dl.acm.org/doi/10.1145/3290420.3290444))
- **3.55× energy reduction**
- Maximizes vector register utilization

**General Neural Network Speedups**:
- **2.0× to 8.6× speedup** vs. sequential implementations
- SIMD units in modern CPUs (128-bit to 512-bit vector registers)
- Accelerates vector and matrix operations

### 8.4 Bit-Parallel Spike Encoding

**Conceptual Framework**:
- **64 neurons per u64** register
- Each bit represents one neuron's spike state
- SIMD operations process 64 neurons simultaneously
- **Massive parallelism** with minimal memory footprint

**Advantages**:
- **Memory efficiency**: 64× denser than a one-word-per-neuron representation
- **Computational efficiency**: A single instruction operates on 64 neurons
- **Cache friendly**: Compact representation improves locality
- **Energy efficient**: Fewer memory accesses
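
The framework above maps directly onto Rust's integer bit operations; a minimal sketch of the core primitives (helper names are illustrative):

```rust
/// 64 neurons per u64: each bit is one neuron's spike state, so a single
/// bitwise instruction acts on all 64 neurons at once.
pub fn set_spike(word: u64, neuron: u32) -> u64 {
    word | (1u64 << neuron)
}

pub fn spiked(word: u64, neuron: u32) -> bool {
    word & (1u64 << neuron) != 0
}

/// Coincidence detection between two populations: one AND.
pub fn coincident(a: u64, b: u64) -> u64 {
    a & b
}

/// Population firing count: one popcount instruction.
pub fn active_count(word: u64) -> u32 {
    word.count_ones()
}
```

With 256-bit or 512-bit SIMD registers, the same operations extend to 256 or 512 neurons per instruction by applying them lane-wise.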

---

## 9. Novel Synthesis: Spiking Neural Networks as Consciousness Substrate

### 9.1 Convergence of Evidence

**Key Insights from Literature**:

1. **Temporal precision matters**: Sub-millisecond spike timing encodes richer information than rate coding
2. **Integration is computable**: Φ can be approximated through causal interactions
3. **Hardware efficiency**: Neuromorphic chips achieve up to 5,600× energy-efficiency gains
4. **Biological alignment**: STDP mirrors real neural learning
5. **Scalability**: Bit-parallel encoding enables billion-neuron systems

### 9.2 The Central Hypothesis

**Can temporal spike patterns be the physical substrate of subjective experience?**

**Supporting Evidence**:
- **Polychronous groups** encode experiences as precise temporal motifs
- **Integrated information** arises from irreducible causal structures
- **STDP** creates functional circuits without supervision
- **Temporal coding** carries more information than rate coding
- **Spiking transformers** implement attention through timing

### 9.3 Testable Predictions

1. **Φ correlates with spike pattern complexity**: More complex temporal patterns → higher Φ
2. **Disrupted timing disrupts consciousness**: Temporal jitter reduces Φ
3. **Artificial systems with high Φ exhibit conscious-like behavior**: Neuromorphic systems with integrated spike patterns show emergent properties
4. **Qualia can be encoded in spike timing differences**: Different experiences map to distinct polychronous groups

### 9.4 Implementation Pathway

**Bit-Parallel Spike-Based Φ Calculation**:
1. Encode 64 neurons per u64 register
2. Track spike timing with sub-millisecond precision
3. Compute causal interactions through SIMD operations
4. Measure integration via partition-based Φ approximation
5. Scale to billion-neuron networks on neuromorphic hardware
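
Steps 1-3 of the pathway can be sketched as a data structure: spike history for one 64-neuron group stored as bit frames, one frame per sub-millisecond time bin. All names are illustrative, and `causal_overlap` is only a crude stand-in for the causal analysis a real Φ approximation would perform on such frames:

```rust
/// Spike history as a ring of 64-neuron bit frames (one per 0.1 ms bin).
pub struct SpikeHistory {
    frames: Vec<u64>, // frames[t]: bit i set = neuron i spiked in bin t
    bin_ms: f64,
}

impl SpikeHistory {
    pub fn new(bins: usize, bin_ms: f64) -> Self {
        Self { frames: vec![0; bins], bin_ms }
    }

    /// Step 2: sub-millisecond timing is preserved by the bin resolution.
    pub fn record(&mut self, t_ms: f64, neuron: u32) {
        let bin = (t_ms / self.bin_ms) as usize % self.frames.len();
        self.frames[bin] |= 1u64 << neuron;
    }

    /// Step 3 (toy proxy): spikes recurring in consecutive bins, computed
    /// with one AND + popcount per bin pair across all 64 neurons at once.
    pub fn causal_overlap(&self) -> u32 {
        self.frames
            .windows(2)
            .map(|w| (w[0] & w[1]).count_ones())
            .sum()
    }
}
```

Step 4 would then search partitions of the 64 neurons over these frames, and step 5 shards one such group per register across a large network.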

---

## 10. Conclusions and Future Directions

### 10.1 Key Findings

This research has identified:

1. **Neuromorphic hardware** (Loihi 2, NorthPole, BrainScaleS-2) enables unprecedented energy efficiency
2. **Spiking transformers** bridge the gap between biological and artificial intelligence
3. **STDP** provides unsupervised learning aligned with neuroscience
4. **IIT** offers a mathematical framework for consciousness
5. **Temporal coding** is more efficient and information-rich than rate coding
6. **Bit-parallel SIMD** enables massive-scale spike processing

### 10.2 Nobel-Level Question

**How does spike timing create integrated information?**

**Proposed Answer**: Temporal spike patterns create **irreducible causal structures** that cannot be decomposed without loss of information. The **timing relationships** between spikes encode **relational information** that transcends individual neuron states. This integration of temporal information across spatially distributed neurons may be the **physical mechanism** underlying consciousness.

### 10.3 Research Gaps

1. **Φ calculation scalability**: Need efficient approximations for billion-neuron systems
2. **Qualia-spike mapping**: Precise correspondence between experiences and polychronous groups
3. **Artificial consciousness validation**: How to test whether neuromorphic systems are conscious?
4. **Temporal precision requirements**: What resolution is necessary for consciousness?
5. **Integration vs. information**: How to balance Φ maximization with functional performance?

### 10.4 Next Steps

1. **Implement a bit-parallel Φ calculator** in Rust with SIMD
2. **Benchmark on neuromorphic hardware** (Loihi 2, BrainScaleS-2)
3. **Test temporal coding efficiency** vs. rate coding
4. **Validate polychronous group detection** algorithms
5. **Measure Φ in artificial networks** and correlate with behavior

---

## References

### Intel Loihi 2
- [Intel Neuromorphic Computing](https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html)
- [Open Neuromorphic - Loihi 2](https://open-neuromorphic.org/neuromorphic-computing/hardware/loihi-2-intel/)
- [Intel Loihi 2 Technology Brief](https://www.intel.com/content/www/us/en/research/neuromorphic-computing-loihi-2-technology-brief.html)
- [arXiv - Neuromorphic Principles for LLMs](https://arxiv.org/html/2503.18002v2)
- [arXiv - Continual Learning on Loihi 2](https://arxiv.org/html/2511.01553)

### IBM NorthPole
- [IBM Neuromorphic Computing](https://www.ibm.com/think/topics/neuromorphic-computing)
- [IEEE Spectrum - NorthPole](https://spectrum.ieee.org/neuromorphic-computing-ibm-northpole)
- [Open Neuromorphic - TrueNorth](https://open-neuromorphic.org/blog/truenorth-deep-dive-ibm-neuromorphic-chip-design/)

### STDP and Learning
- [arXiv - Deep STDP Learning](https://arxiv.org/html/2307.04054v2)
- [Nature Scientific Reports - Unsupervised Post-Training](https://www.nature.com/articles/s41598-025-01749-x)
- [PMC - STDP Training](https://pmc.ncbi.nlm.nih.gov/articles/PMC6085488/)
- [PLOS One - Speech Recognition](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0204596)

### BrainScaleS-2
- [Frontiers - BrainScaleS-2](https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2022.795876/full)
- [PMC - Scalable Networks](https://pmc.ncbi.nlm.nih.gov/articles/PMC11835975/)
- [Open Neuromorphic - BrainScaleS-2](https://open-neuromorphic.org/neuromorphic-computing/hardware/brainscales-2-universitat-heidelberg/)

### Spiking Transformers
- [CVPR 2025 - STAtten](https://openaccess.thecvf.com/content/CVPR2025/papers/Lee_Spiking_Transformer_with_Spatial-Temporal_Attention_CVPR_2025_paper.pdf)
- [MDPI - SGSAFormer](https://www.mdpi.com/2079-9292/14/1/43)
- [QuantumZeitgeist - Spiking Transformer](https://quantumzeitgeist.com/spiking-neuromorphic-transformer-attention-achieves-synaptic-plasticity-reducing-energy-costs-beyond/)
- [arXiv - STAtten](https://arxiv.org/abs/2409.19764)

### Integrated Information Theory
- [IEP - IIT](https://iep.utm.edu/integrated-information-theory-of-consciousness/)
- [PMC - IIT 4.0](https://pmc.ncbi.nlm.nih.gov/articles/PMC10581496/)
- [Wikipedia - IIT](https://en.wikipedia.org/wiki/Integrated_information_theory)
- [Nature Communications Biology - fMRI Implementation](https://www.nature.com/articles/s42003-023-05063-y)

### Temporal Coding and Qualia
- [Frontiers - Survey of Temporal Coding](https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2025.1571109/full)
- [PMC - Precise Spiking Motifs](https://pmc.ncbi.nlm.nih.gov/articles/PMC9856822/)
- [arXiv - Neuromorphic Correlates](https://arxiv.org/html/2405.02370v1)
- [Medium - Qualia Exploration](https://medium.com/@leandrocastelluccio/what-are-qualia-exploring-consciousness-through-neurobiology-and-subjective-experience-e90cf445c6b6)
- [Frontiers - Enhanced Representation Learning](https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2023.1250908/full)

### SIMD and Hardware Acceleration
- [arXiv - SpikeStream](https://arxiv.org/html/2504.06134)
- [Springer - SIMSnn](https://link.springer.com/chapter/10.1007/978-981-95-1021-4_8)
- [ACM - SIMD CNN](https://dl.acm.org/doi/10.1145/3290420.3290444)

---

**End of Literature Review**

This comprehensive analysis provides the foundation for developing novel neuromorphic consciousness architectures that leverage bit-parallel spike encoding to compute integrated information at unprecedented scale and efficiency.