Robustness-aware Learning
Robustness-aware learning refers to training models in a way that enhances their performance and reliability in the presence of unexpected disturbances or anomalies.
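One common way to make training robustness-aware is to expose the model to perturbed inputs during optimization. The following is a minimal sketch, not a canonical implementation: a logistic-regression trainer (names `train_robust` and `predict` are illustrative) that injects Gaussian noise into the inputs at every epoch so the learned decision boundary tolerates small disturbances.

```python
import numpy as np

def train_robust(X, y, noise_std=0.1, lr=0.1, epochs=200, seed=0):
    """Logistic regression trained on noise-perturbed inputs (illustrative)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Perturb the inputs each epoch so the model repeatedly sees
        # disturbed versions of the training data.
        Xp = X + rng.normal(0.0, noise_std, size=X.shape)
        z = Xp @ w + b
        p = 1.0 / (1.0 + np.exp(-z))  # sigmoid
        grad_w = Xp.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    """Classify clean (unperturbed) inputs with the trained parameters."""
    return (X @ w + b > 0).astype(int)
```

Noise injection is only one instance of robustness-aware training; adversarial training and distributionally robust objectives follow the same pattern of optimizing against perturbed data.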
A method that preserves structural constraints in model compression while optimizing for multiple scales.
A method for optimizing multi-layer perceptrons (MLPs) to enhance model efficiency across different configurations.
A technique used to reduce the size of a neural network by removing weights or neurons that contribute little to the model’s performance.
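A simple version of this idea is magnitude pruning: weights with the smallest absolute values are assumed to contribute least and are zeroed out. A minimal NumPy sketch (the function name and `sparsity` parameter are illustrative, not from any particular library):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the given fraction of smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold below (or at) which weights are considered negligible.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

In practice, pruning is usually followed by fine-tuning so the remaining weights can compensate for the removed ones.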
A framework for building reasoning-oriented large language models that incorporates multiple nested submodels optimized for different deployment configurations.
An algorithmic approach that aggregates votes from multiple sources in a way that is resilient to adversarial manipulation or noise.
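One standard building block for such resilience is replacing the mean with a coordinate-wise median: a minority of arbitrarily corrupted voters can drag the mean anywhere, but cannot move the median far. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def robust_aggregate(votes):
    """Coordinate-wise median of vote vectors.

    Unlike the mean, the median tolerates a minority of voters
    submitting arbitrarily corrupted values.
    """
    return np.median(np.asarray(votes, dtype=float), axis=0)
```

For example, four honest votes near (1, 0) plus one adversarial vote of (1000, -1000) still aggregate to a value near (1, 0), whereas the mean would be pulled to roughly (201, -200).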
An algorithm that partitions the vertices of a graph into two disjoint subsets while minimizing the number of edges between the subsets.
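For small graphs, the minimum-cut bipartition can be found by exhaustive search over all ways of splitting the vertex set into two nonempty parts; practical algorithms (e.g. Kernighan-Lin or spectral methods) approximate this for large graphs. A brute-force sketch for illustration only:

```python
from itertools import combinations

def min_cut_bipartition(vertices, edges):
    """Split vertices into two nonempty parts, minimizing the number of
    crossing edges. Exhaustive search: suitable for small graphs only."""
    vs = list(vertices)
    best = (len(edges) + 1, None)
    for r in range(1, len(vs)):
        for part in combinations(vs, r):
            side = set(part)
            # Count edges with exactly one endpoint in `side`.
            cut = sum(1 for u, v in edges if (u in side) != (v in side))
            if cut < best[0]:
                best = (cut, (side, set(vs) - side))
    return best
```

On two triangles joined by a single bridge edge, the search recovers the natural split: cutting only the bridge.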
A metric that evaluates the effectiveness of a retrieval system based on the graded relevance of the retrieved documents.
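A widely used metric of this kind is normalized discounted cumulative gain (NDCG): graded relevance scores are discounted by the logarithm of the rank, then normalized by the best achievable ordering. A minimal sketch:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the ideal (descending) ordering; lies in [0, 1]."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A ranking that places the most relevant documents first scores 1.0; any worse ordering of the same documents scores strictly less.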
A critical threshold in community detection problems that separates the regimes where community structure can be reliably detected from where it cannot.
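As a concrete (assumed) setting: in the symmetric two-community stochastic block model with expected intra-community degree a and inter-community degree b, this threshold is the Kesten-Stigum bound, and detection is possible when (a - b)^2 > 2(a + b). A one-line check, with the function name illustrative:

```python
def above_kesten_stigum(a, b):
    """True when the two-community SBM with intra-degree a and
    inter-degree b sits above the Kesten-Stigum detection threshold."""
    return (a - b) ** 2 > 2 * (a + b)
```

Below the threshold, no algorithm can recover the communities better than random guessing, even though the communities exist in the generative model.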
A statistical technique aimed at reducing the variability of an estimator to improve the reliability of the results.
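One classic example is antithetic variates in Monte Carlo estimation: each uniform sample u is paired with 1 - u, and the negative correlation between f(u) and f(1 - u) lowers the variance of the averaged estimator. A minimal sketch (function names are illustrative):

```python
import numpy as np

def mc_estimate(f, n, rng):
    """Plain Monte Carlo estimate of E[f(U)] for U ~ Uniform(0, 1)."""
    u = rng.random(n)
    return f(u).mean()

def antithetic_estimate(f, n, rng):
    """Antithetic variates: average f over paired samples u and 1 - u.

    For monotone f, f(u) and f(1 - u) are negatively correlated,
    so the paired average has lower variance than plain sampling.
    """
    u = rng.random(n // 2)
    return 0.5 * (f(u) + f(1.0 - u)).mean()
```

For instance, estimating the integral of exp(x) over [0, 1] (true value e - 1), the antithetic estimator typically lands much closer to the truth than the plain estimator at the same sample budget.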