Wesbury Lab Usenet Corpus: anonymized compilation of postings from 47,860 English-language newsgroups, 2005-2010 (40 GB)

Wesbury Lab Wikipedia Corpus: snapshot of all the articles in the English part of Wikipedia, taken in April 2010. It was processed to remove all links and irrelevant material (navigation text, etc.). The corpus is untagged, raw text. Used by Stanford NLP (1.8 GB).

WorldTree Corpus of Explanation Graphs for Elementary Science Questions: a corpus of manually-constructed explanation graphs, explanatory role ratings, and an associated semistructured tablestore for most publicly available elementary science exam questions in the US (8 MB)

Wikipedia Extraction (WEX): a processed dump of English-language Wikipedia (66 GB)

Wikipedia XML Data: complete copy of all Wikimedia wikis, in the form of wikitext source and metadata embedded in XML (500 GB)
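
The dump is raw wikitext wrapped in MediaWiki's XML export format, so it is usually processed as a stream rather than loaded as a single tree. Below is a minimal sketch of one way to do that with Python's standard library; the file name is a placeholder for whichever dump file you actually downloaded.

```python
# Minimal sketch: stream a MediaWiki XML dump (such as the Wikipedia XML Data
# files above) without loading it into memory. "enwiki-pages-articles.xml" is
# a placeholder path, not part of the dataset description.
import xml.etree.ElementTree as ET

def iter_pages(path):
    """Yield (title, wikitext) pairs from a MediaWiki XML export file."""
    title, text = None, None
    for _event, elem in ET.iterparse(path, events=("end",)):
        tag = elem.tag.rsplit("}", 1)[-1]  # strip the XML namespace prefix
        if tag == "title":
            title = elem.text
        elif tag == "text":
            text = elem.text or ""
        elif tag == "page":
            yield title, text
            elem.clear()  # release memory held by the finished <page> element

if __name__ == "__main__":
    for title, wikitext in iter_pages("enwiki-pages-articles.xml"):
        print(title, len(wikitext))
        break  # just show the first page
```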

Yahoo! Answers Comprehensive Questions and Answers: Yahoo! Answers corpus as of 10/25/2007. Contains 4,483,032 questions and their answers. (3.6 GB)

Yahoo! Answers Questions Asked in French: subset of the Yahoo! Answers corpus from 2006 to 2015, consisting of 1.7 million questions posed in French and their corresponding answers. (3.8 GB)

Yahoo! Answers Manner Questions: subset of the Yahoo! Answers corpus from the 10/25/2007 dump, selected for their linguistic properties. Contains 142,627 questions and their answers. (104 MB)

Yahoo! HTML Forms Extracted from Publicly Available Webpages: a small sample of pages that contain complex HTML forms; contains 2.67 million complex forms. (50+ GB)

Yahoo N-Gram Representations: this dataset contains n-gram representations. The data may serve as a testbed for the query rewriting task, a common problem in IR research, as well as for the word and sentence similarity task, which is common in NLP research. (2.6 GB)

Yahoo! N-Grams, version 2.0: n-grams (n = 1 to 5) extracted from a corpus of 14.6 million documents (126 million unique sentences, 3.4 billion running words) crawled from over 12,000 news-oriented sites (12 GB)
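
For readers unfamiliar with what such releases contain: an n-gram dataset is essentially a table of token sequences and their frequencies. The sketch below is illustrative only (plain Python over whitespace-tokenized sentences, not the Yahoo! distribution format) and shows how counts for n = 1 to 5 are typically accumulated.

```python
# Illustrative only: accumulate n-gram counts (n = 1..5) from whitespace-
# tokenized sentences, the kind of statistics an n-gram corpus is built from.
# This is not the Yahoo! release format.
from collections import Counter

def ngram_counts(sentences, max_n=5):
    counts = {n: Counter() for n in range(1, max_n + 1)}
    for sentence in sentences:
        tokens = sentence.lower().split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[n][tuple(tokens[i:i + n])] += 1
    return counts

counts = ngram_counts(["the cat sat on the mat", "the cat ate"])
print(counts[2].most_common(2))  # e.g. [(('the', 'cat'), 2), (('cat', 'sat'), 1)]
```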

Yahoo! Search Logs with Relevance Judgments: anonymized Yahoo! search logs with relevance judgments (1.3 GB)

Yahoo! Semantically Annotated Snapshot of the English Wikipedia: English Wikipedia dated 2006-11-04, processed with a number of publicly-available NLP tools. 1,490,688 entries. (6 GB)

Yelp: includes restaurant rankings and 2.2M reviews (on request)

Youtube: information on 1.7 million YouTube videos (torrent)

  • Awesome Public Datasets / NLP (includes more lists)
  • AWS Public Datasets
  • CrowdFlower: Data for Everyone (lots of small surveys they conducted and data obtained by crowdsourcing for a specific task)
  • Kaggle 1, 2 (make sure, though, that the Kaggle competition data can be used outside of the competition!)
  • Open Library
  • Quora (primarily annotated corpora)
  • /r/datasets (endless list of datasets, though most are scraped by amateurs and not properly documented or licensed)
  • Rs.io (another big list)
  • Stackexchange: Opendata
  • Stanford NLP group (mainly annotated corpora and treebanks, or actual NLP tools)
  • Yahoo! Webscope (also includes papers that use the provided data)
  • SaudiNewsNet: 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers. (2 MB)
  • Collection of Urdu Datasets for POS, NER and NLP tasks.

German Political Speeches Corpus: collection of recent speeches held by top German representatives (25 MB, 11 MTokens)

NEGRA: A Syntactically Annotated Corpus of German Newspaper Texts. Available for free for all universities and non-profit organizations. You need to sign and send a form to obtain it. (on request)

Ten Thousand German News Articles Dataset: 10,273 German-language news articles categorized into nine classes for topic classification. (26.1 MB)

100k German Court Decisions: Open Legal Data releases a dataset of 100,000 German court decisions and 444,000 citations (772 MB)
