
What Is Natural Language Processing?

Natural Language Processing (NLP).


Natural Language Processing (NLP) is a branch of Artificial Intelligence (AI) that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to bridge the gap between human communication and computer understanding.



While natural language processing is not a new science, the technology is rapidly advancing thanks to an increased interest in human-to-machine communication, plus the availability of big data, powerful computing and enhanced algorithms. As a human, you may speak and write in English, Spanish or Chinese. But a computer's native language, known as machine code or machine language, is largely incomprehensible to most people. At your device's lowest levels, communication occurs not with words but through millions of zeros and ones that produce logical actions.



Basic NLP tasks include tokenization and parsing, lemmatization/stemming, part-of-speech tagging, language detection and identification of semantic relationships. In general terms, NLP tasks break down language into shorter, elemental pieces, try to understand relationships between the pieces and explore how the pieces work together to create meaning.
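To make these basic tasks concrete, here is a minimal sketch using Python's NLTK library (one of several popular options; spaCy is another). It assumes NLTK is installed and its tokenizer, tagger and WordNet data packages have been downloaded; the sample sentence is an invented example.

```python
# A minimal sketch of the basic NLP tasks above, assuming the NLTK library
# and its data packages (e.g. "punkt", the perceptron tagger and "wordnet"),
# which can be fetched with nltk.download().
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

text = "The striped bats are hanging on their feet."

# Tokenization: split the sentence into elemental pieces (tokens).
tokens = nltk.word_tokenize(text)
print(tokens)                 # ['The', 'striped', 'bats', 'are', 'hanging', ...]

# Part-of-speech tagging: label each token (noun, verb, adjective, ...).
print(nltk.pos_tag(tokens))

# Stemming and lemmatization: reduce words to a root form.
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
print(stemmer.stem("hanging"))       # 'hang'
print(lemmatizer.lemmatize("bats"))  # 'bat'
```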


 These underlying tasks are often used in higher-level NLP capabilities, such as:

 Content categorization: A linguistic-based document summary, including search and indexing, content alerts and duplication detection.

 Topic discovery and modeling: Accurately capture the meaning and themes in text collections, and apply advanced analytics to text, like optimization and forecasting.

Contextual extraction: Automatically pull structured information from text-based sources.

Speech-to-text and text-to-speech conversion: Transforming voice commands into written text, and vice versa.

Document summarization: Automatically generating synopses of large bodies of text (a toy sketch follows this list).

 Machine translation: Automatic translation of text or speech from one language to another.
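To give a flavour of how one of these capabilities might work, here is a toy extractive summarizer in Python that scores sentences by word frequency and keeps the highest-scoring ones. Real summarization systems are far more sophisticated; treat this as a sketch of the idea, not a production method.

```python
# A rough sketch of extractive summarization: score each sentence by the
# frequency of the words it contains, then keep the top-scoring sentences
# in their original order.
import re
from collections import Counter

def summarize(text, num_sentences=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by summing the corpus-wide frequency of its words.
    scores = {
        s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
        for s in sentences
    }
    top = sorted(sentences, key=scores.get, reverse=True)[:num_sentences]
    # Preserve the original sentence order in the summary.
    return " ".join(s for s in sentences if s in top)

text = ("NLP systems break language into pieces. NLP systems also "
        "model meaning. The weather was pleasant yesterday.")
print(summarize(text, num_sentences=1))  # keeps the highest-scoring sentence
```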



 History of Natural Language Processing.

The history of natural language processing generally started in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three to five years, machine translation would be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s, when the first statistical machine translation systems were developed.


 Up to the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. Many of the notable early successes occurred in the field of machine translation, due especially to work at IBM Research, where successively more complicated statistical models were developed. 


These systems were able to take advantage of existing multilingual textual corpora that had been produced by the Parliament of Canada and the European Union as a result of laws calling for the translation of all governmental proceedings into all official languages of the corresponding systems of government.



However, most other systems depended on corpora specifically developed for the tasks implemented by these systems, which was (and often continues to be) a major limitation in the success of these systems. As a result, a great deal of research has gone into methods of more effectively learning from limited amounts of data. Recent research has increasingly focused on unsupervised and semi-supervised learning algorithms. Such algorithms are able to learn from data that has not been hand-annotated with the desired answers, or using a combination of annotated and non-annotated data.
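As a rough illustration of learning from a combination of annotated and non-annotated data, the sketch below uses scikit-learn's SelfTrainingClassifier (an assumption of this example, not a tool discussed above). The tiny corpus and labels are invented, and unlabeled examples are marked with -1.

```python
# A minimal sketch of semi-supervised learning, assuming scikit-learn.
# Unlabeled examples carry the label -1; the self-training classifier
# labels them with its own confident predictions and retrains.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

texts = ["great movie", "loved it", "terrible film", "awful plot",
         "what a film", "really bad"]        # invented toy corpus
labels = np.array([1, 1, 0, 0, -1, -1])      # -1 = not annotated

X = TfidfVectorizer().fit_transform(texts)
clf = SelfTrainingClassifier(LogisticRegression())
clf.fit(X, labels)
print(clf.predict(X[-2:]))  # predictions for the two unlabeled examples
```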


Generally, this task is much more difficult than supervised learning, and typically produces less accurate results for a given amount of input data. However, there is an enormous amount of non-annotated data available (including, among other things, the entire content of the World Wide Web), which can often make up for the inferior results if the algorithm used has a low enough time complexity to be practical.


In the 2010s, representation learning and deep neural network-style machine learning methods became widespread in natural language processing, due in part to a flurry of results showing that such techniques can achieve state-of-the-art results in many natural language tasks, for example in language modeling, parsing and many others. Popular techniques include the use of word embeddings to capture semantic properties of words, and an increase in end-to-end learning of a higher-level task (e.g., question answering) instead of relying on a pipeline of separate intermediate tasks (e.g., part-of-speech tagging and dependency parsing).
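Here is a minimal sketch of training word embeddings, assuming the gensim library. The toy corpus is invented and far too small to produce meaningful vectors, so it only shows the shape of the API.

```python
# A minimal word-embedding sketch, assuming the gensim library.
# Real models are trained on millions of sentences; this toy corpus
# only demonstrates the API.
from gensim.models import Word2Vec

sentences = [["the", "king", "rules", "the", "land"],
             ["the", "queen", "rules", "the", "land"],
             ["dogs", "bark", "at", "night"]]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

vec = model.wv["king"]                       # a 50-dimensional vector
print(model.wv.similarity("king", "queen"))  # cosine similarity of embeddings
```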


 In some areas, this shift has entailed substantial changes in how NLP systems are designed, such that deep neural network-based approaches may be viewed as a new paradigm distinct from statistical natural language processing.


For instance, the term neural machine translation (NMT) emphasizes the fact that deep learning-based approaches to machine translation directly learn sequence-to-sequence transformations, obviating the need for intermediate steps such as word alignment and language modeling that were used in statistical machine translation (SMT).
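As a hedged illustration, an NMT model can be used as a single end-to-end component, for example via the Hugging Face transformers library and a small pretrained model; both are assumptions of this sketch, not tools mentioned above.

```python
# A minimal sketch of NMT as a direct sequence-to-sequence transformation,
# assuming the "transformers" library with a PyTorch or TensorFlow backend;
# the first call downloads the t5-small model weights.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")
result = translator("Natural language processing is fascinating.")
print(result[0]["translation_text"])  # the French translation, in one step
```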



Advantages of Natural Language Processing.

1. Users can ask questions about any subject and get a direct response within seconds.

2. An NLP system provides answers to questions in natural language.

3. The accuracy of the answers increases with the amount of relevant information provided in the question.

4. NLP helps computers communicate with humans in their own language and scales other language-related tasks.

5. It allows you to process more language-based data than a human being could, without fatigue and in an unbiased and consistent way, and it adds structure to a highly unstructured data source.



Disadvantages of Natural Language Processing.

1. Complex queries are difficult to handle: the system may not be able to provide the correct answer to a question that is poorly worded or ambiguous.

2. The system is typically built for a single, specific task; it is unable to adapt to new domains and problems because of its limited functionality.

3. An NLP system often lacks a user interface with features that would allow users to interact further with the system.
