RobustDG
RobustDG is a library of ML models that generalize to unseen domains, with evaluation of accuracy and multiple robustness metrics.
Discover an index of datasets, SDKs, APIs and open-source tools developed by Microsoft researchers and shared with the global academic community below. These experimental technologies, available through Azure AI Foundry Labs, offer a glimpse into the future of AI innovation.
The Freta library for Python enables access to Project Freta, a service used to inspect volatile memory images. Included in this library is a utility, freta, which provides command line access to Project Freta.
Part of the AI for Earth Land Cover Mapping project, this repository holds both the frontend web application and the backend server that make up our “Land Cover Mapping” tool.
WMG is a transformer-based RL agent that attends to a dynamic set of vectors representing observed and recurrent state.
TartanAir dataset: an AirSim simulation dataset for Simultaneous Localization and Mapping (theairlab.org/tartanair-dataset).
Official PyTorch implementation of the paper “Learning Texture Transformer Network for Image Super-Resolution,” accepted at CVPR 2020.
This repository contains information about the cross-lingual evaluation benchmark XGLUE, which is composed of 11 tasks spanning 19 languages.
This repository contains the code and dataset for the following paper: Differentially Private Set Union with Applications to Vocabulary Generation
This repository contains a representative subset of the first-party DNN training workloads on Microsoft’s internal Philly clusters. The trace is a sanitized subset of the workload described in “Analysis of Large-Scale Multi-Tenant GPU Clusters for DNN…
DeBERTa (Decoding-enhanced BERT with disentangled attention) improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content…
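The disentangled-attention idea can be sketched as follows: attention scores are the sum of content-to-content, content-to-position, and position-to-content terms computed from separate content and position vectors. This is a toy NumPy illustration of that scoring scheme, not DeBERTa's actual implementation; the array names, toy dimensions, and simplified relative-position handling are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 4, 8  # toy sequence length and hidden size

# Each token gets two vectors: one for content, one for (relative) position.
content = rng.normal(size=(n, d))   # content embeddings
position = rng.normal(size=(n, d))  # position embeddings (simplified here)

# Separate query/key projections for content and position (illustrative).
Wq_c, Wk_c = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wq_p, Wk_p = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wv = rng.normal(size=(d, d))

Qc, Kc = content @ Wq_c, content @ Wk_c
Qp, Kp = position @ Wq_p, position @ Wk_p
V = content @ Wv

# Disentangled score: content-to-content + content-to-position + position-to-content.
scores = (Qc @ Kc.T) + (Qc @ Kp.T) + (Qp @ Kc.T)
scores /= np.sqrt(3 * d)  # scale by sqrt(3d) since three terms are summed

# Softmax over keys, then attend over the content values.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
output = weights @ V  # shape (n, d)
```

The point of the decomposition is that word content and word position contribute to the attention score through separate projections, rather than being mixed into a single embedding before attention.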