Abstract
Formal logic expressions are commonly written in standardized mathematical notation. Learning this notation typically requires years of experience and is not an explicit part of undergraduate curricula, so constructing and comprehending logical predicates can feel difficult and unintuitive. We hypothesized that this process can be automated using neural machine translation. Most machine translation techniques involve word-based segmentation as a preprocessing step; because our custom dataset hosts first-order-logic (FOL) semantics primarily in unigram tokens, the word-based approach is not applicable. Our proposed solution automates the translation of short English sentences into FOL expressions using character-level prediction in a recurrent neural network model. We trained four encoder-decoder models (LSTM, bidirectional GRU with attention, and two variants of bidirectional LSTM with attention). Our experimental results showed that several established neural translation techniques can produce highly accurate machine translators of English sentences to FOL formalisms, given only characters as markers of semantics. We also demonstrated that attention-based enhancement to the encoder-decoder architecture can vastly improve translation accuracy.
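The character-level segmentation the abstract describes — treating each character, rather than each word, as a token — can be sketched as a small preprocessing step. This is an illustrative assumption of how such a pipeline might look, not the authors' code; all function names and the toy English-to-FOL pair are hypothetical.

```python
# Hypothetical sketch of character-level preprocessing: English sentences
# and FOL expressions are segmented into individual characters, with
# reserved start/end markers. All names here are illustrative assumptions.

def build_vocab(texts):
    """Map every character seen in the corpus to an integer id.
    Ids 0 and 1 are reserved for start/end markers."""
    chars = sorted({ch for t in texts for ch in t})
    vocab = {"<s>": 0, "</s>": 1}
    vocab.update({ch: i + 2 for i, ch in enumerate(chars)})
    return vocab

def encode(text, vocab):
    """Turn a string into a list of character ids, wrapped in markers."""
    return [vocab["<s>"]] + [vocab[ch] for ch in text] + [vocab["</s>"]]

def decode(ids, vocab):
    """Invert encode(), dropping the start/end markers."""
    inv = {i: ch for ch, i in vocab.items()}
    return "".join(inv[i] for i in ids if i > 1)

# Toy English -> FOL pair in the spirit of the dataset described above.
pairs = [("all men are mortal", "forall x (Man(x) -> Mortal(x))")]
src_vocab = build_vocab(p[0] for p in pairs)
tgt_vocab = build_vocab(p[1] for p in pairs)
ids = encode(pairs[0][1], tgt_vocab)
print(decode(ids, tgt_vocab))  # round-trips the FOL string
```

Because the model sees only character ids, FOL symbols such as parentheses, `->`, and quantifier names carry their semantics entirely through character sequences, which is what motivates the character-level approach over word segmentation.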
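The attention enhancement credited above with improving translation accuracy lets the decoder weight encoder hidden states at each output step. A minimal dot-product attention sketch, assuming illustrative shapes and names (this is not the paper's implementation):

```python
import numpy as np

# Minimal dot-product attention over encoder hidden states: score each
# encoder state against the current decoder state, softmax the scores,
# and return the weighted sum as a context vector. Shapes are assumptions.

def attention(decoder_state, encoder_states):
    """decoder_state: shape (d,); encoder_states: shape (T, d).
    Returns (context vector of shape (d,), weights of shape (T,))."""
    scores = encoder_states @ decoder_state           # (T,) similarity scores
    weights = np.exp(scores - scores.max())           # stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states                # (d,) weighted sum
    return context, weights

# Illustrative run with random hidden states.
T, d = 5, 8
enc = np.random.default_rng(0).standard_normal((T, d))
ctx, w = attention(enc[2], enc)
```

In a full bidirectional encoder-decoder, the context vector would be concatenated with the decoder state before predicting the next character, giving the decoder direct access to any position in the input sentence.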
| Original language | English |
|---|---|
| Title of host publication | SoutheastCon 2021 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 465-472 |
| Number of pages | 8 |
| ISBN (Electronic) | 9780738111315 |
| ISBN (Print) | 9780738111315 |
| DOIs | |
| State | Published - Mar 10 2021 |
| Event | 2021 SoutheastCon, SoutheastCon 2021 - Atlanta, United States Duration: Mar 10 2021 → Mar 13 2021 |
Publication series
| Name | SoutheastCon 2021 |
|---|
Conference
| Conference | 2021 SoutheastCon, SoutheastCon 2021 |
|---|---|
| Country/Territory | United States |
| City | Atlanta |
| Period | 3/10/21 → 3/13/21 |
Bibliographical note
Publisher Copyright:© 2021 IEEE.
ASJC Scopus Subject Areas
- Computer Networks and Communications
- Software
- Electrical and Electronic Engineering
- Control and Systems Engineering
- Signal Processing
Keywords
- Machine learning
- Neural machine translation
- NLP
- Predicate logic
Fingerprint
Dive into the research topics of 'Generating Predicate Logic Expressions From Natural Language'. Together they form a unique fingerprint.