The NLP Index

Magical Software ⚡

Total repos: 419
4/26/2021
Pointer-Generator network implementation in AllenNLP.
4/26/2021
PyTorch implementation: Proximal Policy Optimization (PPO) for playing Super Mario Bros.
4/26/2021
The first release candidate for UMAP 0.4 is out, providing lots of new features, including performance improvements, embedding to different manifolds, an inverse transform, and plotting tools.
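A minimal sketch of the two headline additions, the inverse transform and non-Euclidean output spaces (assuming umap-learn >= 0.4; the toy data is a placeholder):

```python
import numpy as np
import umap

X = np.random.rand(500, 32)            # toy data; replace with your features

# Embed onto a sphere instead of the usual Euclidean plane.
sphere_mapper = umap.UMAP(output_metric="haversine").fit(X)

# Standard 2-D embedding, then map points from embedding space
# back to the original feature space with the new inverse transform.
mapper = umap.UMAP(n_components=2, random_state=42).fit(X)
embedding = mapper.transform(X)
reconstructed = mapper.inverse_transform(embedding[:10])
print(reconstructed.shape)             # (10, 32)
```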
4/26/2021
DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills
4/26/2021
Quantum GAN with Hybrid Generator: PennyLane and PyTorch implementation of QGAN-HG, quantum generative models for small-molecule drug discovery, based on MolGAN.
4/26/2021
tsaug is a Python package for time series augmentation. It offers a set of augmentation methods for time series, as well as a simple API to connect multiple augmenters into a pipeline.
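A pipeline sketch following the package's documented API (assuming tsaug >= 0.2; the toy batch is a placeholder):

```python
import numpy as np
from tsaug import TimeWarp, Crop, Quantize, Drift, Reverse

# Toy batch of multivariate series: (n_series, n_timestamps, n_channels).
X = np.random.randn(10, 400, 3)

my_augmenter = (
    TimeWarp() * 5                       # random time warping, 5 parallel copies
    + Crop(size=300)                     # random crop to subsequences of length 300
    + Quantize(n_levels=[10, 20, 30])    # random quantization to 10/20/30 levels
    + Drift(max_drift=(0.1, 0.5)) @ 0.8  # random drift, applied with 80% probability
    + Reverse() @ 0.5                    # reverse the sequence with 50% probability
)
X_aug = my_augmenter.augment(X)
```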
4/26/2021
Learning the Graphical Structure of Electronic Health Records with Graph Convolutional Transformer
4/26/2021
MMDetection, an object detection toolbox that contains a rich set of object detection and instance segmentation methods as well as related components and modules.
4/26/2021
Graphbrain: Automated meaning extraction and text understanding. The Semantic Hypergraph is central to Graphbrain, both conceptually and functionally, and can be seen from three different perspectives.
4/26/2021
PyGLN: Gated Linear Network (GLN) implementations for NumPy, PyTorch, TensorFlow and JAX. A new family of neural networks introduced by DeepMind.
4/26/2021
With 4.5B parallel sentences in 576 language pairs, CCMatrix is the largest data set of high-quality, web-based bitexts for training translation models.
4/26/2021
Salesforce: Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering. Questions that require multi-hop reasoning at web scale necessitate retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question.
4/26/2021
AI is heading into Adobe's core products, like style transfer via GANs (Generative Adversarial Networks). Super amazing to see this. Now everyone can easily use machine learning to bring their creativity to new levels.
4/26/2021
PEGASUS (Google AI): Pre-training with Extracted Gap-sentences for Abstractive Summarization, now available on the Hugging Face model hub for super easy integration into your NLP workflow!
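A minimal integration sketch with the transformers library (pegasus-xsum is one of the released checkpoints; sentencepiece must be installed):

```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = "PEGASUS masks whole sentences during pre-training and learns to generate them."
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```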
4/26/2021
Google Research: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. We designed a pre-training self-supervised objective (called gap-sentence generation) for Transformer encoder-decoder models to improve fine-tuning performance on abstractive summarization, achieving state-of-the-art results on 12 diverse summarization datasets.
4/26/2021
Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs : Recent graph-to-text models generate text from graph-based data using either global or local aggregation to learn node representations.
4/26/2021
Google AI: BLEURT is an evaluation metric for Natural Language Generation. It takes a pair of sentences as input, a reference and a candidate, and returns a score that indicates to what extent the candidate is grammatical and conveys the meaning of the reference. It is comparable to sentence-BLEU and BERTScore.
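A usage sketch following the repo README (assumes the bleurt package is installed and a checkpoint such as BLEURT-base has been downloaded and unzipped; the path below is a placeholder):

```python
from bleurt import score

checkpoint = "bleurt-base-128"   # path to the unzipped checkpoint directory
scorer = score.BleurtScorer(checkpoint)
scores = scorer.score(
    references=["The cat sat on the mat."],
    candidates=["A cat was sitting on the mat."],
)
print(scores)   # one float per (reference, candidate) pair; higher is better
```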
4/26/2021
Logic-Guided Data Augmentation and Regularization for Consistent Question Answering
4/26/2021
A Transformer-based Approach for Source Code Summarization
4/26/2021
Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting : Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only. However, it is still challenging to associate source-target sentences in the latent space. As people speaking different languages biologically share similar visual systems, the potential of achieving better alignment through visual content is promising yet under-explored in unsupervised multimodal MT (MMT).
4/26/2021
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP.
4/26/2021
TextAttack is a library for running adversarial attacks against natural language processing (NLP) models. TextAttack builds attacks from four components: a search method, goal function, transformation, and a set of constraints.
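A sketch of the Python API (assuming a recent TextAttack release that provides the Attacker class; the IMDB model name is one of the fine-tuned checkpoints TextAttack publishes on the Hugging Face hub):

```python
import transformers
import textattack

# Wrap a fine-tuned HF classifier so TextAttack can query it.
name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)

# A recipe bundles the four components (search method, goal function,
# transformation, constraints); TextFooler is one built-in recipe.
attack = textattack.attack_recipes.TextFoolerJin2019.build(wrapper)
dataset = textattack.datasets.HuggingFaceDataset("imdb", split="test")
attacker = textattack.Attacker(attack, dataset)
attacker.attack_dataset()
```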
4/26/2021
Lip2Wav: Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis (CVPR 2020)
4/26/2021
Facebook AI: TaBERT, a pre-trained language model for learning joint representations of natural language utterances and structured tables for semantic parsing.
4/26/2021
BERTweet: A pre-trained language model for English Tweets
4/26/2021
Stochastic Sequence Propagation - A Keras Model for optimizing DNA, RNA and protein sequences based on a predictor: Protein sequence optimization using activation maximization and logit normalization.
4/26/2021
Google AI: REALM is a method for augmenting neural networks with a knowledge retrieval mechanism. If a question answering neural network is given a question like "What is the angle of an equilateral triangle?", it could retrieve a Wikipedia page to determine the answer.
4/26/2021
ParsBERT: Transformer-based Model for Persian Language Understanding. This model is pre-trained on a large Persian corpus with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 2M documents. A large subset of this corpus was crawled manually.
4/26/2021
A Corpus for Large-Scale Phonetic Typology. A major hurdle in data-driven research on typology is having sufficient data in many languages to draw meaningful conclusions. We present VoxClamantis v1.0, the first large-scale corpus for phonetic typology, with aligned segments and estimated phoneme-level labels in 690 readings spanning 635 languages, along with acoustic-phonetic measures of vowels and sibilants.
4/26/2021
The GPT-3 paper just got released! GPT-3 is an autoregressive language model trained with 175 billion parameters, 10x more than any previous non-sparse language model.
4/26/2021
Stopwords in Technical Language Processing: Rigorously identifying generic, insignificant, uninformative stopwords in engineering texts and curating a stopword list for technical language processing applications.
4/26/2021
Little Ball of Fur is a graph sampling extension library for Python. Little Ball of Fur consists of methods that can sample from graph structured data. To put it simply, it is a Swiss Army knife for graph sampling tasks. First, it includes a large variety of vertex, edge, and exploration sampling techniques. Second, it provides a unified application public interface which makes the application of sampling algorithms trivial for end-users.
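A usage sketch of that interface (assuming the littleballoffur package; samplers expect a NetworkX graph with consecutive integer node ids):

```python
import networkx as nx
from littleballoffur import RandomWalkSampler

# Any NetworkX graph with consecutive integer node ids works.
graph = nx.watts_strogatz_graph(1000, 10, 0.1)

sampler = RandomWalkSampler(number_of_nodes=100)
sampled = sampler.sample(graph)
print(sampled.number_of_nodes())   # 100
```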
4/26/2021
Does your AI sound like a human? FastSpeech 2: Fast and High-Quality End-to-End Text-to-Speech, now with implementation and code.
4/26/2021
Microsoft AI - FastSpeech 2: Fast and High-Quality End-to-End Text-to-Speech
4/26/2021
CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training. Two important tasks at the intersection of knowledge graphs and natural language processing are graph-to-text (G2T) and text-to-graph (T2G) conversion.
4/26/2021
Check out this supreme paper review! This paper suggests an approximate way of calculating self-attention in Transformer architectures with linear space and time complexity in the sequence length. The resulting performance on benchmark datasets is similar to that of the RoBERTa model based on the original Transformer, which uses the much less efficient quadratic attention.
4/26/2021
audino: A Modern Annotation Tool for Audio and Speech. It allows annotators to define and describe temporal segmentation in audio. These segments can be labelled and transcribed easily using a dynamically generated form. An admin can centrally control user roles and project assignment through the admin dashboard. The dashboard also enables describing labels and their values. The annotations can easily be exported in JSON format for further processing.
4/26/2021
How to Avoid Being Eaten by a Grue: Structured Exploration Strategies for Textual Worlds - We introduce Q*BERT, an agent that learns to build a knowledge graph of the world by answering questions, which leads to greater sample efficiency.
4/26/2021
Facebook Research: Open-Domain Conversational Agents: Current Progress, Open Problems, and Future Directions, with datasets.
4/26/2021
A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards - Cross-lingual text summarization aims at generating a document summary in one language given input in another language.
4/26/2021
Google Research: Scalable Deep Generative Modeling for Sparse Graphs - Current deep neural methods suffer from limited scalability: for a graph with n nodes and m edges, existing deep neural methods require Ω(n²) complexity by building up the adjacency matrix.
4/26/2021
BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision in Natural Language Processing - BOND leverages the power of pre-trained language models (e.g., BERT and RoBERTa) to improve the prediction performance of NER models.
4/26/2021
Linear Attention Transformer: a Transformer based on a variant of attention that has linear complexity with respect to sequence length.
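Not the repo's exact code, but a minimal sketch of the underlying trick: with a positive feature map phi, softmax attention is replaced by phi(Q)(phi(K)^T V), and associativity lets us evaluate it in time and memory linear in sequence length (the elu+1 feature map here follows the "Transformers are RNNs" formulation):

```python
import torch

def linear_attention(q, k, v):
    """O(n) attention sketch. Shapes: (batch, heads, n, dim)."""
    q = torch.nn.functional.elu(q) + 1          # positive feature map phi
    k = torch.nn.functional.elu(k) + 1
    kv = torch.einsum("bhnd,bhne->bhde", k, v)  # (d, e) summary, linear in n
    z = 1 / torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2))  # normalizer
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

q = k = v = torch.randn(2, 8, 1024, 64)
out = linear_attention(q, k, v)
print(out.shape)   # torch.Size([2, 8, 1024, 64])
```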
4/26/2021
Sparsely Gated Mixture of Experts: parallel computation patterns with minimal changes to the existing model code. Scale up a multilingual neural machine translation Transformer model with Sparsely-Gated Mixture-of-Experts beyond 600 billion parameters using automatic sharding.
4/26/2021
That is a Known Lie: Detecting Previously Fact-Checked Claims - Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media
4/26/2021
An avatar bot reading news articles reaching 150K followers? Xiaomingbot is a visual avatar that reads generated news using BERT, with facial expressions and lip motion.
4/26/2021
SketchGraphs is a dataset of 15 million sketches extracted from real-world CAD models coupled with their geometric constraint graphs.
4/26/2021
Stanza extended with the first domain-specific NLP models for biomedical and clinical English.
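A usage sketch following the Stanza biomedical documentation (the mimic clinical package combined with the i2b2 NER model is one of the released configurations):

```python
import stanza

# Biomedical packages include "craft" and "genia"; "mimic" is clinical.
# NER models such as "i2b2" can be added to the pipeline.
stanza.download("en", package="mimic", processors={"ner": "i2b2"})
nlp = stanza.Pipeline("en", package="mimic", processors={"ner": "i2b2"})

doc = nlp("The patient was given 2 units of packed red blood cells.")
print([(ent.text, ent.type) for ent in doc.entities])
```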
4/26/2021
DeLighT: Very Deep and Light-weight Transformers - source code and paper. DeLighT delivers similar or better performance than transformer-based models with significantly fewer parameters. DeLighT allocates parameters more efficiently both within each Transformer block, using DExTra, a deep and light-weight transformation, and across blocks, using block-wise scaling, which allows for shallower and narrower DeLighT blocks near the input and wider and deeper DeLighT blocks near the output. Overall, DeLighT networks are 2.5 to 4 times deeper than standard transformer models and yet have fewer parameters and operations.
4/26/2021
A Multilingual Neural Machine Translation Model for Biomedical Data - The model can translate from five languages (French, German, Italian, Korean and Spanish) into English. It is trained with large amounts of generic and biomedical data, using domain tags.
4/26/2021
COOKIE: A Dataset for Conversational Recommendation over Knowledge Graphs in E-commerce - A new dataset for conversational recommendation over knowledge graphs in e-commerce platforms.
4/26/2021
Document-level Event-based Extraction Using Generative Template-filling Transformers for NLP Tasks - Tackles the classic information extraction problem of document-level template filling. Sentence-level approaches are ill-suited to the task, so the authors introduce a generative transformer-based encoder-decoder framework designed to model context at the document level: it can make extraction decisions across sentence boundaries; is implicitly aware of noun phrase coreference structure; and has the capacity to respect cross-role dependencies in the template structure.
4/26/2021
Top2Vec is an algorithm for topic modeling and semantic search. It automatically detects topics present in text and generates jointly embedded topic, document and word vectors.
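A minimal sketch of the documented API (documents is a placeholder for a list of raw strings, large enough for the clustering step to find topics):

```python
from top2vec import Top2Vec

# One call handles embedding, dimension reduction, clustering,
# and topic extraction.
model = Top2Vec(documents)

print(model.get_num_topics())
topic_words, word_scores, topic_nums = model.get_topics()
docs, doc_scores, doc_ids = model.search_documents_by_topic(topic_num=0, num_docs=5)
```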
4/26/2021
A Fast and Robust BERT-based Dialogue State Tracker for Schema-Guided Dialogue Dataset - Dialog State Tracking (DST) is one of the most crucial modules for goal-oriented dialogue systems. They introduce FastSGT (Fast Schema Guided Tracker), a fast and robust BERT-based model for state tracking in goal-oriented dialogue systems.
4/26/2021
Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity - For human-like agents, including virtual avatars and social robots, making proper gestures while speaking is crucial in human-agent interaction. Co-speech gestures enhance interaction experiences and make the agents look alive.
4/26/2021
Generative Language Modeling for Automated Theorem Proving - Transformer-based language models as an automated prover and proof assistant, GPT-f, for the Metamath formalization language. GPT-f found new short proofs that were accepted into the main Metamath library.
4/26/2021
QED: A Framework and Dataset for Explanations in Question Answering - A question answering system that in addition to providing an answer provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility and trust.
4/26/2021
GeDi: A Powerful New Method for Controlling Language Models - uses smaller language models as generative classifiers to guide generation from larger language models. This method can make generations friendlier, reduce bias and toxicity, and achieve zero-shot controllable generation of unseen topics.
4/26/2021
It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners - Out-performing GPT-3 with only 223M parameters? Using Pattern-Exploiting Training (PET).
4/26/2021
Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining - There is an increasing focus on model-based dialog evaluation metrics such as ADEM, RUBER, and the more recent BERT-based metrics.
4/26/2021
MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems - A simple yet effective transfer learning framework, which allows us to plug-and-play pre-trained seq2seq models, and jointly learn dialogue state tracking and dialogue response generation.
4/26/2021
Rethinking Attention with Performers - Transformer models have achieved state-of-the-art results across a diverse range of domains, including natural language, conversation, images, and even music. The core block of every Transformer architecture is the attention module, which computes similarity scores for all pairs of positions in an input sequence. This, however, scales poorly with the length of the input sequence, requiring quadratic computation time to produce all similarity scores, as well as quadratic memory size to construct a matrix to store these scores.
4/26/2021
An implementation of Performer, a linear attention-based transformer variant using the Fast Attention Via positive Orthogonal Random features (FAVOR+) approach.
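A sketch using the performer-pytorch package (module names and arguments follow its README; treat them as assumptions if your version differs):

```python
import torch
from performer_pytorch import Performer

attn_model = Performer(
    dim=512,      # model dimension
    depth=6,      # number of layers
    heads=8,      # attention heads
    causal=True,  # autoregressive masking
)
x = torch.randn(1, 2048, 512)
y = attn_model(x)   # (1, 2048, 512); attention cost is linear in length
```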
4/26/2021
GENRE (Generative ENtity REtrieval) by Facebook Research: a sequence-to-sequence approach to entity retrieval, based on a fine-tuned BART architecture.
4/26/2021
SentAugment is a data augmentation technique for semi-supervised learning in NLP. It uses state-of-the-art sentence embeddings to structure the information of a very large bank of sentences.
4/26/2021
CoRefi: a standalone suite for coreference annotation. Coreference resolution is the task of clustering words and names that refer to the same concept, entity or event. Coreference is an important NLP task for downstream applications such as abstractive summarization, reading comprehension, and information extraction.
4/26/2021
MolDesigner: Interactive Design of Efficacious Drugs with Deep Learning
4/26/2021
Dual Inference for Improving Language Understanding and Generation - Natural language understanding (NLU) and natural language generation (NLG) tasks hold a strong dual relationship, where NLU aims at predicting semantic labels based on natural language utterances and NLG does the opposite.
4/26/2021
BERT2DNN: BERT Distillation with Massive Unlabeled Data for Online E-Commerce Search - Relevance has a significant impact on user experience and business profit for e-commerce search platforms. In this work, they propose a data-driven framework for search relevance prediction by distilling knowledge from BERT and related multi-layer Transformer teacher models into simple feed-forward networks, using a large amount of unlabeled data.
4/26/2021
Bort is an optimal subset of architectural parameters for the BERT architecture, extracted by applying a fully polynomial-time approximation scheme (FPTAS) for neural architecture search. Bort has an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size.
4/26/2021
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks - Graph Neural Networks (GNNs) are the predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance.
4/26/2021
Fixed-Length Protein Embeddings using Contextual Lenses - Protein database search tools such as BLAST are instrumental for research in life sciences but they are slow and based on surface-level sequence similarity.
4/26/2021
MTLB-STRUCT @PARSEME 2020: Capturing Unseen Multiword Expressions Using Multi-task Learning and Pre-trained Masked Language Models - This paper describes a semi-supervised system that jointly learns verbal multiword expressions (VMWEs) and dependency parse trees as an auxiliary task.
4/26/2021
MK-SQuIT: Synthesizing Questions using Iterative Template-filling - The aim of this work is to create a framework for synthetically generating question/query pairs with as little human input as possible. These datasets can be used to train machine translation systems to convert natural language questions into queries, a useful tool that could allow for more natural access to database information.
4/26/2021
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification - A dataset for many-hop evidence extraction and fact verification. It challenges models to extract facts from several Wikipedia articles that are relevant to a claim and classify whether the claim is Supported or Not-Supported by the facts.
4/26/2021
Interpretable Multi-dataset Evaluation for Named Entity Recognition - With the proliferation of models for natural language processing tasks, it is even harder to understand the differences between models and their relative merits.
4/26/2021
Strongly Generalizable Question Answering (GrailQA) is a new large-scale, high-quality dataset for question answering on knowledge bases (KBQA) on Freebase with 64,331 questions annotated with both answers and corresponding logical forms in different syntaxes (e.g., SPARQL, S-expression). It can be used to test three levels of generalization in KBQA: i.i.d., compositional, and zero-shot.
4/26/2021
Design Space for Graph Neural Networks - The rapid evolution of Graph Neural Networks (GNNs) has led to a growing number of new architectures as well as novel applications. However, current research focuses on proposing and evaluating specific architectural designs of GNNs, as opposed to studying the more general design space of GNNs that consists of a Cartesian product of different design dimensions, such as the number of layers or the type of the aggregation function. Additionally, GNN designs are often specialized to a single task, yet few efforts have been made to understand how to quickly find the best GNN design for a novel task or a novel dataset. Here they define and systematically study the architectural design space for GNNs which consists of 315,000 different designs over 32 different predictive tasks.
4/26/2021
Molecular representation learning with language models and domain-relevant auxiliary tasks - A Transformer architecture, specifically BERT, to learn flexible and high quality molecular representations for drug discovery problems.
4/26/2021
A Generalization of Transformer Networks to Graphs - A generalization of transformer neural network architecture for arbitrary graphs.
4/26/2021
Domain-specific BERT representations for Named Entity Recognition of lab protocols - Supervised models trained to predict properties from representations have been achieving high accuracy on a variety of tasks.
4/26/2021
On Generating Extended Summaries of Long Documents - A new method for generating extended summaries of long papers. Their method exploits the hierarchical structure of the documents and incorporates it into an extractive summarization model through a multi-task learning approach.
4/26/2021
Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection - A first-of-its-kind large synthetic training dataset for online hate classification, created from scratch with trained annotators over multiple rounds of dynamic data collection.
4/26/2021
TextBox: A Unified, Modularized, and Extensible Framework for Text Generation. TextBox aims to support a broad set of text generation tasks and models.
4/26/2021
Trankit: A Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
4/26/2021
ArtEmis: Affective Language for Visual Art - A novel large-scale dataset and accompanying machine learning models aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language.
4/26/2021
Efficient-CapsNet: Capsule Network with Self-Attention Routing - Deep convolutional neural networks, assisted by architectural design strategies, make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations.
4/26/2021
TextFlint is a multilingual robustness evaluation platform for natural language processing tasks, which unifies general text transformation, task-specific transformation, adversarial attack, sub-population, and their combinations to provide a comprehensive robustness analysis.
4/26/2021
Layout Parser: A Python Library for Document Layout Understanding
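A usage sketch following the layoutparser README (requires detectron2; the model-zoo config string is one documented option and page.png is a placeholder):

```python
import cv2
import layoutparser as lp

image = cv2.imread("page.png")[..., ::-1]   # BGR -> RGB

# Load a pre-trained detection model from the model zoo and detect
# layout regions (text blocks, titles, tables, figures, ...).
model = lp.Detectron2LayoutModel("lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config")
layout = model.detect(image)

for block in layout:
    print(block.type, block.coordinates)    # e.g. "Text", (x1, y1, x2, y2)
```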
4/26/2021
LoFTR: Detector-Free Local Feature Matching with Transformers - A novel method for local image feature matching. Instead of performing image feature detection, description, and matching sequentially, they propose to first establish pixel-wise dense matches at a coarse level and later refine the good matches at a fine level.
4/26/2021
Efficient transfer learning for NLP with ELECTRA: Can we use ELECTRA to achieve close-to-SOTA performance for NLP in low-resource settings, in terms of compute cost?
4/26/2021
EXPLAINABOARD: An Explainable Leaderboard for NLP - With the rapid development of NLP research, leaderboards have emerged as one tool to track the performance of various systems on various NLP tasks. They are effective in this goal to some extent, but generally present a rather simplistic one-dimensional view of the submitted systems, communicated only through holistic accuracy numbers.
4/26/2021
MS2: Multi-Document Summarization of Medical Studies - To assess the effectiveness of any medical intervention, researchers must conduct a time-intensive and highly manual literature review. NLP systems can help to automate or assist in parts of this expensive process.
4/26/2021
Aligning Latent and Image Spaces to Connect the Unconnectable - A GAN model which can generate infinite images of diverse and complex scenes.
4/26/2021
GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds - An unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft.
4/26/2021
How to Train BERT with an Academic Budget
4/26/2021
Does BERT Pretrained on Clinical Notes Reveal Sensitive Data? Large Transformers pretrained over clinical notes from Electronic Health Records (EHR) have afforded substantial gains in performance on predictive clinical tasks.
4/26/2021
Don't Neglect the Obvious: On the Role of Unambiguous Words in Word Sense Disambiguation.
4/26/2021
Interactive COVID-19 calculator with a classical infectious disease model, SEIR (Susceptible-Exposed-Infected-Removed).
4/26/2021
Implementing Bengio's Neural Probabilistic Language Model (NPLM) using PyTorch: the Neural Probabilistic Language Model (NPLM) aims at creating a language model using the functionality and features of an artificial neural network.