modest natural-language processing
npm install compromise

compromise tries its best.

Welcome to v12! - Release Notes here 👍


compromise makes it simple to interpret and match text:

let doc = nlp(entireNovel)

doc.if('the #Adjective of times').text()
// "it was the blurst of times??"
if (doc.has('simon says #Verb')) {
  return true
}

conjugate and negate verbs in any tense:

let doc = nlp('she sells seashells by the seashore.')
doc.verbs().toPastTense()
doc.text()
// 'she sold seashells by the seashore.'


transform nouns to plural and possessive forms:

let doc = nlp('the purple dinosaur')
doc.nouns().toPlural()
doc.text()
// 'the purple dinosaurs'


interpret plaintext numbers (with the compromise-numbers plugin):

let doc = nlp('ninety five thousand and fifty two')
doc.numbers().add(2)
doc.text()
// 'ninety five thousand and fifty four'


grab subjects in a text:

let doc = nlp(buddyHolly)
doc.people().if('mary').json()
// [{text:'Mary Tyler Moore'}]

let doc = nlp(freshPrince)
doc.places().first().text()
// 'West Philadelphia'

doc = nlp('the opera about richard nixon visiting china')
doc.topics().json()
// [
//   { text: 'richard nixon' },
//   { text: 'china' }
// ]


work with contracted and implicit words:

let doc = nlp("we're not gonna take it, no we ain't gonna take it.")

// match an implicit term
doc.has('going') // true

// transform
doc.contractions().expand()
doc.text()
// 'we are not going to take it, no we are not going to take it.'

Use it on the client-side:

<script src="https://unpkg.com/compromise"></script>
<script src="https://unpkg.com/compromise-numbers"></script>
<script>
  nlp.extend(compromiseNumbers)

  var doc = nlp('two bottles of beer')
  doc.numbers().minus(1)
  document.body.innerHTML = doc.text()
  // 'one bottle of beer'
</script>

or as an es-module:

import nlp from 'compromise'

var doc = nlp('London is calling')
doc.verbs().toNegative()
// 'London is not calling'

or if you don't care about POS-tagging, you can use the tokenize-only build: (90kb!)

<script src="https://unpkg.com/compromise/builds/compromise-tokenize.js"></script>
<script>
  var doc = nlp('No, my son is also named Bort.')
  // you can see the text has no tags
  console.log(doc.has('#Noun')) // false
  // but the whole api still works
  console.log(doc.has('my .* is .? named /^b[oa]rt/')) // true
</script>

compromise is 170kb (minified).

it's pretty fast. It can run on keypress.

it works mainly by conjugating many forms of a basic word list. The final lexicon is ~14,000 words.

you can read more about how it works here.
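The conjugation idea can be sketched in plain javascript. This is a toy illustration under simplified assumptions (regular suffixes only, a made-up `expandLexicon` helper), not the library's actual rules:

```javascript
// Toy sketch: expand a small base word list into a larger lexicon
// by pre-computing inflected forms ahead of time. The real library's
// conjugation rules handle irregular forms; these suffixes are
// simplified assumptions for illustration.
function expandLexicon(baseVerbs) {
  const lexicon = {}
  for (const verb of baseVerbs) {
    lexicon[verb] = 'Infinitive'
    lexicon[verb + 's'] = 'PresentTense' // walk → walks
    lexicon[verb + 'ed'] = 'PastTense'   // walk → walked
    lexicon[verb + 'ing'] = 'Gerund'     // walk → walking
  }
  return lexicon
}

const lexicon = expandLexicon(['walk', 'jump'])
console.log(lexicon['walked']) // 'PastTense'
console.log(Object.keys(lexicon).length) // 8 entries from 2 base words
```

this is why a modest base word list can cover a much larger vocabulary at runtime.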


set a custom interpretation of your own words:

let myWords = {
  kermit: 'FirstName',
  fozzie: 'FirstName',
}
let doc = nlp(muppetText, myWords)

or make more changes with a compromise-plugin.

const nlp = require('compromise')

nlp.extend((Doc, world) => {
  // add new tags
  world.addTags({
    Character: {
      isA: 'Person',
      notA: 'Adjective',
    },
  })
  // add or change words in the lexicon
  world.addWords({
    kermit: 'Character',
    gonzo: 'Character',
  })
  // add methods to run after the tagger
  world.postProcess(doc => {
    doc.match('light the lights').tag('#Verb . #Plural')
  })
  // add a whole new method
  Doc.prototype.kermitVoice = function() {
    this.match('i [(am|was)]').prepend('um,')
    return this
  }
})


(these methods are on the nlp object)

  • .tokenize() - parse text without running POS-tagging
  • .extend() - mix in a compromise-plugin
  • .fromJSON() - load a compromise object from .json() result
  • .verbose() - log our decision-making for debugging
  • .version() - current semver version of the library
  • .world() - grab all current linguistic data
  • .all() - return the whole original document ('zoom out')
  • .found [getter] - is this document empty?
  • .parent() - return the previous result
  • .parents() - return all of the previous results
  • .tagger() - (re-)run the part-of-speech tagger on this document
  • .wordCount() - count the # of terms in the document
  • .length [getter] - count the # of characters in the document (string length)
  • .clone() - deep-copy the document, so that no references remain
  • .cache({}) - freeze the current state of the document, for speed-purposes
  • .uncache() - un-freezes the current state of the document, so it may be transformed

(all match methods use the match-syntax.)

  • .match('') - return a new Doc, with this one as a parent
  • .not('') - return all results except for this
  • .matchOne('') - return only the first match
  • .if('') - return each current phrase, only if it contains this match ('only')
  • .ifNo('') - Filter-out any current phrases that have this match ('notIf')
  • .has('') - Return a boolean if this match exists
  • .lookBehind('') - search through earlier terms, in the sentence
  • .lookAhead('') - search through following terms, in the sentence
  • .before('') - return all terms before a match, in each phrase
  • .after('') - return all terms after a match, in each phrase
  • .lookup([]) - quick find for an array of string matches
  • .pre('') - add this punctuation or whitespace before each match
  • .post('') - add this punctuation or whitespace after each match
  • .trim() - remove start and end whitespace
  • .hyphenate() - connect words with hyphen, and remove whitespace
  • .dehyphenate() - remove hyphens between words, and set whitespace
  • .toQuotations() - add quotation marks around these matches
  • .toParentheses() - add brackets around these matches
  • .tag('') - Give all terms the given tag
  • .tagSafe('') - Only apply tag to terms if it is consistent with current tags
  • .unTag('') - Remove this tag from the given terms
  • .canBe('') - return only the terms that can be this tag
  • .map(fn) - run each phrase through a function, and create a new document
  • .forEach(fn) - run a function on each phrase, as an individual document
  • .filter(fn) - return only the phrases that return true
  • .find(fn) - return a document with only the first phrase that matches
  • .some(fn) - return true or false if there is one matching phrase
  • .random(fn) - sample a subset of the results
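As a rough illustration of how tag-based matching differs from plain string search, here is a toy matcher in plain javascript. It supports only '#Tag', '.' and literal words, and is an assumption-laden sketch (with a made-up `toyMatch` helper), not the library's implementation:

```javascript
// Toy sketch of tag-based matching: match a pattern like
// '#Noun is #Adjective' against pre-tagged tokens, where '#X'
// matches by tag, '.' matches any token, and plain words match
// by text. Returns the first matching slice, or null.
function toyMatch(tokens, pattern) {
  const pats = pattern.split(' ')
  outer: for (let i = 0; i + pats.length <= tokens.length; i++) {
    for (let j = 0; j < pats.length; j++) {
      const tok = tokens[i + j]
      const p = pats[j]
      if (p === '.') continue               // wildcard matches anything
      if (p.startsWith('#')) {
        if (tok.tag !== p.slice(1)) continue outer // tag mismatch
      } else if (tok.text !== p) {
        continue outer                      // literal word mismatch
      }
    }
    return tokens.slice(i, i + pats.length)
  }
  return null
}

const tokens = [
  { text: 'the', tag: 'Determiner' },
  { text: 'koala', tag: 'Noun' },
  { text: 'is', tag: 'Copula' },
  { text: 'sleepy', tag: 'Adjective' },
]
console.log(toyMatch(tokens, '#Noun is #Adjective'))
// matches 'koala is sleepy'
```

the real match-syntax adds optionality, alternation, capture groups, and much more on top of this basic idea.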


These are some helpful extensions:


npm install compromise-adjectives


npm install compromise-dates


npm install compromise-numbers


npm install compromise-export

  • .export() - store a parsed document for later use
  • nlp.load() - re-generate a Doc object from .export() results

npm install compromise-html

  • .html({}) - generate sanitized html from the document

npm install compromise-hash

  • .hash() - generate an md5 hash from the document+tags
  • .isEqual(doc) - compare the hash of two documents for semantic-equality

npm install compromise-keypress


npm install compromise-ngrams


npm install compromise-paragraphs

this plugin creates a wrapper around the default sentence objects.


npm install compromise-sentences


npm install compromise-syllables

  • .syllables() - split each term by its typical pronunciation


Typescript support is still a work in progress. So far, plugin support is mostly complete, and can be used to extend compromise type-safely.

import nlp from 'compromise'
import ngrams from 'compromise-ngrams'
import numbers from 'compromise-numbers'

// .extend() can be chained
const nlpEx = nlp.extend(ngrams).extend(numbers)

nlpEx('This is type safe!').ngrams({ min: 1 })
nlpEx('This is type safe!').numbers()

Type-safe Plugins

The .extend() function returns an nlp type with updated Document and World types (Phrase, Term and Pool are not currently supported). While the global nlp also receives the plugin at runtime, its type will not be updated; this is a limitation of Typescript.

Typesafe plugins can be created by using the nlp.Plugin type:

interface myExtendedDoc {
  sayHello(): string
}

interface myExtendedWorld {
  hello: string
}

const myPlugin: nlp.Plugin<myExtendedDoc, myExtendedWorld> = (Doc, world) => {
  world.hello = 'Hello world!'

  Doc.prototype.sayHello = () => world.hello
}

const _nlp = nlp.extend(myPlugin)
const doc = _nlp('This is safe!')
doc.world.hello = 'Hello again!'

Known Issues

  • compromise_1.default is not a function - This is a problem with your tsconfig.json. It can be solved by adding "esModuleInterop": true. Make sure to run tsc --init when starting a new Typescript project.
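For reference, a minimal tsconfig.json with that flag looks like:

```json
{
  "compilerOptions": {
    "esModuleInterop": true
  }
}
```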


3rd party:
Some fun Applications:


  • slash-support: We currently split slashes up as different words, like we do for hyphens, so things like this don't work: nlp('the koala eats/shoots/leaves').has('koala leaves') // false

  • inter-sentence match: By default, sentences are the top-level abstraction. Inter-sentence, or multi-sentence matches aren't supported: nlp("that's it. Back to Winnipeg!").has('it back')//false

  • nested match syntax: the dangerous beauty of regex is that you can recurse indefinitely. Our match syntax is much weaker. Things like this are not (yet) possible: doc.match('(modern (major|minor))? general'). Complex matches must be achieved with successive .match() statements.

  • dependency parsing: Proper sentence transformation requires understanding the syntax tree of a sentence, which we don't currently do. We should! Help wanted with this.


    ☂️ Isn't javascript too...

      yeah it is!
      it wasn't built to compete with NLTK, and may not fit every project.
      string processing is synchronous too, and parallelizing node processes is weird.
      See here for information about speed & performance, and here for project motivations

    💃 Can it run on my arduino-watch?

      Only if it's water-proof!
      Read quick start for running compromise in workers, mobile apps, and all sorts of funny environments.

    🌎 Compromise in other Languages?

      we've got work-in-progress forks for German and French, in the same philosophy, and need some help.

    Partial builds?

      we do offer a [compromise-tokenize](./builds/compromise-tokenize.js) build, which has the POS-tagger pulled-out.
      but otherwise, compromise isn't easily tree-shaken.
      the tagging methods are competitive, and greedy, so it's not recommended to pull things out.
      Note that without full POS-tagging, the contraction-parser won't work perfectly. ((spencer's cool) vs. (spencer's house))
      It's recommended to run the library fully.

See Also:

  •   naturalNode - fancier statistical nlp in javascript
  •   superScript - clever conversation engine in js
  •   nodeBox linguistics - conjugation, inflection in javascript
  •   reText - very impressive text utilities in javascript
  •   jsPos - javascript build of the time-tested Brill-tagger
  •   spaCy - speedy, multilingual tagger in C/python
  •   Prose - quick tagger in Go by Joseph Kato


  • Last commit: 05-29
  • Created: 2011-07-05