alexandrainst/danlp

DaNLP is a repository of Natural Language Processing resources for the Danish language: a collection of available datasets and models for a variety of NLP tasks. The aim is to make Danish NLP easier to adopt for practitioners in industry, and the project is therefore licensed to allow commercial use. The project features code examples showing how to use the datasets and models in popular NLP frameworks such as spaCy, Transformers and Flair, as well as deep learning frameworks such as PyTorch. See our documentation pages for more details about our models and datasets, and for definitions of the modules provided through the DaNLP package.

If you are new to NLP or want to know more about the project in a broader perspective, you can start on our microsite.


Help us improve DaNLP

  • 🙋 Have you tried the DaNLP package? Then we would love to chat with you about your experiences from a company perspective. The call takes approximately 20-30 minutes and requires no preparation, in English or Danish as you prefer. Please leave your details here and we will reach out to arrange a call.

News

  • 🎉 Version 0.1.2 has been released with:
    • 2 new models for hate speech detection (Hatespeech) based on BERT and ELECTRA
    • 1 new model for hate speech classification

Next up

  • new model and data for discourse coherence

Installation

To get started using DaNLP in your Python project, simply install the pip package. Note that the default pip package does not install all NLP libraries, because we want you to be free to limit the dependencies to what you actually use. We also provide an installation option that installs all the required dependencies at once.

Install with pip

To get started using DaNLP simply install the project with pip:

pip install danlp 

Note that the default installation of DaNLP does not install other NLP libraries such as Gensim, spaCy, Flair or Transformers. This keeps the installation as minimal as possible and lets you choose to e.g. load word embeddings with either spaCy, Flair or Gensim. Therefore, depending on the functions you need, you should install one or more of the following: pip install flair, pip install spacy and/or pip install gensim.
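Since the right extra to install depends on which loaders you call, it can help to check which of these optional backends your environment already has. A minimal stdlib-only sketch (not part of the DaNLP package itself):

```python
import importlib.util

# Optional NLP backends that DaNLP can use for loading models and embeddings.
optional_backends = ["gensim", "spacy", "flair", "transformers"]

# importlib.util.find_spec returns None when a package is not installed.
available = [name for name in optional_backends
             if importlib.util.find_spec(name) is not None]
print("Installed backends:", available)
```

If a backend you need is missing from the printed list, install it with the corresponding pip command above.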

Alternatively, if you want to install all the required dependencies, including the packages mentioned above, you can do:

pip install danlp[all]

You can check the requirements.txt file to see which versions of the packages DaNLP has been tested with.

Install from source

If you want to be able to use the latest developments before they are released in a new pip package, or you want to modify the code yourself, then clone this repo and install from source.

git clone https://github.com/alexandrainst/danlp.git
cd danlp
# minimum installation
pip install .
# or install all the packages
pip install .[all]

To install the dependencies used in the package with the tested versions:

pip install -r requirements.txt

Install from GitHub

Alternatively, you can install the latest version from GitHub using:

pip install git+https://github.com/alexandrainst/danlp.git

Install with Docker

To quickly get started with DaNLP and try out the models, you can use our Docker image. To start an IPython session, simply run:

docker run -it --rm alexandrainst/danlp ipython

If you want to run a <script.py> in your current working directory you can run:

docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app alexandrainst/danlp python <script.py>

Quick Start

Read more in our documentation pages.

NLP Models

Natural Language Processing is an active area of research comprising many different tasks. The DaNLP repository provides an overview of Danish models for some of the most common NLP tasks (and is continuously evolving).

Here is the list of NLP tasks we currently cover in the repository.

You can also find some of our transformers models on HuggingFace.

If you are interested in Danish support for any specific NLP task you are welcome to get in contact with us.

We also recommend checking out the list of Danish NLP corpora/tools/models maintained by Finn Årup Nielsen (warning: not all items are available for commercial use; check the license).

Datasets

The number of datasets in the Danish language is limited. The DaNLP repository provides an overview of the available Danish datasets that can be used for commercial purposes.

The DaNLP package allows you to download and preprocess datasets.

Examples

You will find examples that show how to use NLP in Danish (using our models or others) in our benchmark scripts and Jupyter notebook tutorials.

This project maintains a blog in Danish on Medium, where we write about Danish NLP; in time we will also present some real cases of how NLP is applied in Danish companies.

Structure of the repo

To help you navigate, here is an overview of the structure of the repository:

.
├── danlp                  # Source files
│   ├── datasets           # Code to load datasets with different frameworks
│   └── models             # Code to load models with different frameworks
├── docker                 # Docker image
├── docs                   # Documentation and files for setting up Read The Docs
│   ├── docs               # Documentation for tasks, datasets and frameworks
│   │   ├── tasks          # Documentation for NLP tasks with benchmark results
│   │   ├── frameworks     # Overview of the different frameworks used
│   │   ├── gettingstarted # Guides for installation and getting started
│   │   └── imgs           # Images used in documentation
│   └── library            # Files used for Read the Docs
├── examples               # Examples, tutorials and benchmark scripts
│   ├── benchmarks         # Scripts for reproducing benchmark results
│   └── tutorials          # Jupyter notebook tutorials
└── tests                  # Tests for continuous integration with Travis

How do I contribute?

If you want to contribute to the DaNLP repository and make it better, your help is very welcome. You can contribute to the project in many ways:

  • Help us write good tutorials on Danish NLP use-cases
  • Contribute with your own pretrained NLP models or datasets in Danish (see our contributing guidelines for more details on how to contribute to this repository)
  • Create GitHub issues with questions and bug reports
  • Notify us of other Danish NLP resources or tell us about any good ideas that you have for improving the project through the Discussions section of this repository.

Who is behind?

The DaNLP repository is maintained by the Alexandra Institute which is a Danish non-profit company with a mission to create value, growth and welfare in society. The Alexandra Institute is a member of GTS, a network of independent Danish research and technology organisations.

Between 2019 and 2020, the work on this repository was part of the Dansk For Alle performance contract (RK) allocated to the Alexandra Institute by the Danish Ministry of Higher Education and Science. Since 2021, the project has been funded through the Dansk NLP activity plan, which is part of the Digital sikkerhed, tillid og dataetik performance contract.

An overview of the project can be found on our microsite.

Cite

If you want to cite this project, please use the following BibTeX entry:

@inproceedings{danlp2021,
    title = {{DaNLP}: An open-source toolkit for Danish Natural Language Processing},
    author = {Brogaard Pauli, Amalie and
      Barrett, Maria and
      Lacroix, Oph{\'e}lie and
      Hvingelby, Rasmus},
    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021)},
    month = jun,
    year = {2021}
}

Read the paper here.

See our documentation pages for references to specific models or datasets.