Weekly Arxiv Summary - Written by AI (June 12, 2023)
As the fields of artificial intelligence and machine learning continue to advance, researchers are developing new algorithms and models that reshape our understanding of these technologies. In the last week alone, 464 papers were published on these topics, but a few stood out as particularly noteworthy.
One of the most interesting papers published this week was [1], which examined circumscription, a key approach to defining non-monotonic description logics (DLs). The authors established the decidability of conjunctive query (CQ) evaluation over circumscribed DL knowledge bases (KBs) and mapped out the combined and data complexity of evaluating CQs and atomic queries (AQs) for logics ranging from ALCHIO to the DL-Lite family, providing a detailed picture of the benefits and limitations of the circumscription approach.
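To make the idea concrete, here is a textbook-style circumscription example (our illustration, not drawn from the paper): minimizing an "abnormality" concept yields defeasible conclusions that can later be retracted, which is exactly the non-monotonic behavior whose querying complexity the paper studies.

```latex
% Illustrative circumscribed KB (not from the paper):
% TBox: birds fly unless abnormal; ABox: tweety is a bird.
\mathcal{T} = \{\, \mathsf{Bird} \sqsubseteq \mathsf{Flies} \sqcup \mathsf{Ab} \,\},
\qquad
\mathcal{A} = \{\, \mathsf{Bird}(\mathsf{tweety}) \,\}
% Circumscribing (minimizing) Ab keeps only models with a smallest
% possible extension of Ab, here Ab = \emptyset, so the AQ Flies(tweety) holds:
\mathrm{Circ}_{\mathsf{Ab}}(\mathcal{T} \cup \mathcal{A}) \models \mathsf{Flies}(\mathsf{tweety})
% Adding Ab(tweety) to the ABox retracts this conclusion,
% which is what makes the semantics non-monotonic.
```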
Another paper worth mentioning is [2], which proposed the Graph Convolutional Transformer (GCT-TTE) model for travel time estimation. The model fuses several data modalities to capture distinct features of a route, and comprehensive experiments on two datasets showed it outperforming existing path-aware and path-blind models. GCT-TTE has also been deployed as a web service, so further experiments with user-defined routes can be conducted.
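As a rough sketch of how a graph-convolution front end and a transformer encoder might be combined for this task (the layer sizes, fusion scheme, and names below are our assumptions, not the authors' exact GCT-TTE architecture):

```python
# Hypothetical sketch of a graph-convolution + transformer pipeline for
# travel time estimation, in the spirit of GCT-TTE.
import torch
import torch.nn as nn

class TTESketch(nn.Module):
    def __init__(self, n_feats=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.gcn_proj = nn.Linear(n_feats, d_model)  # one-layer GCN: A_hat @ X @ W
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # regress a scalar travel time

    def forward(self, x, a_hat, route):
        # x: (n_nodes, n_feats) road-segment features
        # a_hat: (n_nodes, n_nodes) normalized adjacency of the road graph
        # route: (batch, route_len) segment indices along each route
        h = torch.relu(a_hat @ self.gcn_proj(x))      # graph convolution
        seq = h[route]                                 # gather route embeddings
        enc = self.encoder(seq)                        # contextualize along route
        return self.head(enc.mean(dim=1)).squeeze(-1)  # pool and predict

# Toy usage with random stand-in data:
n_nodes = 10
x = torch.randn(n_nodes, 16)
a_hat = torch.eye(n_nodes)  # identity adjacency as a placeholder
model = TTESketch()
eta = model(x, a_hat, torch.randint(0, n_nodes, (2, 5)))
print(eta.shape)  # torch.Size([2])
```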
In the field of natural language processing, Junxian Zhou et al. presented a unified one-step solution for aspect sentiment quad prediction (ASQP) in [3]. To address the limited size of existing ASQP datasets, the authors released two new datasets that are larger, contain more words per sample, and have a higher quad density. They also developed the One-ASQP model, which uses a sentiment-specific horns tagging schema and a "[NULL]" token to solve the two subtasks of ASQP independently yet simultaneously. Experiments on two benchmark datasets and the newly released datasets demonstrated the effectiveness of the proposed model; the new datasets are available via the link provided in the paper.
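For readers unfamiliar with the task, the snippet below illustrates the quad format and the role of the "[NULL]" token for implicit aspects; the example review and the Quad structure are illustrative, not the One-ASQP implementation.

```python
# ASQP predicts (aspect, category, opinion, sentiment) quadruples;
# "[NULL]" marks an aspect that is implied rather than stated.
from typing import NamedTuple

class Quad(NamedTuple):
    aspect: str      # aspect term, or "[NULL]" when implicit
    category: str    # aspect category
    opinion: str     # opinion term
    sentiment: str   # positive / negative / neutral

review = "Tasty, but the wait was endless."
quads = [
    Quad(aspect="[NULL]", category="food quality", opinion="Tasty",
         sentiment="positive"),   # implicit aspect marked with [NULL]
    Quad(aspect="wait", category="service general", opinion="endless",
         sentiment="negative"),
]
for q in quads:
    print(q)
```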
Another noteworthy paper, this one in chemistry, is [4], which presented RetroKNN, a retrosynthesis prediction method based on local template retrieval. The authors built atom-template and bond-template stores from the training data and, at inference time, retrieved local templates with a k-nearest-neighbor (KNN) search. The retrieved templates were combined with the neural network's predictions to form the final output, with a lightweight adapter adjusting the combination weights. On two widely used benchmarks, RetroKNN improved top-1 accuracy by 7.1% on USPTO-50K and by 12.0% on USPTO-MIT, demonstrating its effectiveness in boosting template-based systems.
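The retrieve-and-combine idea can be sketched as follows; the datastore layout, the distance-to-probability conversion, and the fixed interpolation weight below are simplifications standing in for the paper's learned adapter.

```python
# Minimal sketch of combining k-nearest-neighbor template retrieval with
# model predictions, in the spirit of RetroKNN.
import numpy as np

def knn_template_probs(query, keys, template_ids, n_templates, k=3, temp=1.0):
    """Turn the k nearest datastore entries into a distribution over templates."""
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] / temp)  # closer entries weigh more
    probs = np.zeros(n_templates)
    for idx, w in zip(nearest, weights):
        probs[template_ids[idx]] += w
    return probs / probs.sum()

def combine(model_probs, knn_probs, lam=0.5):
    # RetroKNN learns the weight with a lightweight adapter;
    # a fixed lam keeps this sketch simple.
    return lam * knn_probs + (1 - lam) * model_probs

rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 8))             # stored atom/bond representations
template_ids = rng.integers(0, 5, size=100)  # template label per entry
query = rng.normal(size=8)
model_probs = np.full(5, 0.2)                # placeholder network distribution
final = combine(model_probs, knn_template_probs(query, keys, template_ids, 5))
print(final, final.sum())
```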
Lastly, a paper worth mentioning in the field of language models is [5], which presented an approach to certified reasoning with language models. The authors proposed "guides", tools that use state and incremental constraints to steer generation; their instantiation, LogicGuide, can guarantee that the model's reasoning steps are sound while still building on the model's own assumptions. Experiments on the PrOntoQA and ProofWriter reasoning datasets showed that LogicGuide improves the performance of GPT-3, GPT-3.5 Turbo, and LLaMA, with accuracy gains of up to 35%. Furthermore, it avoided content effects and allowed LLaMA to self-improve by learning from its own certified reasoning.
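A toy version of the guide idea: constrain generation so that only steps derivable from established facts may be emitted. The rule format and interface here are invented for illustration and are not the LogicGuide API.

```python
# Soundness by construction: the generator can only emit steps the
# guide certifies as derivable from the current set of facts.
facts = {"A", "B"}
rules = [({"A", "B"}, "C"), ({"C"}, "D")]  # (premises, conclusion)

def allowed_steps(facts, rules):
    """Return conclusions whose premises are all established facts."""
    return {concl for prem, concl in rules
            if prem <= facts and concl not in facts}

def certified_generate(facts, rules, max_steps=10):
    proof = []
    for _ in range(max_steps):
        options = allowed_steps(facts, rules)
        if not options:
            break
        step = sorted(options)[0]  # a real LM would choose among the options
        proof.append(step)         # every emitted step is sound by construction
        facts = facts | {step}
    return proof

print(certified_generate(facts, rules))  # ['C', 'D']
```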
In conclusion, these papers represent cutting-edge research in AI and machine learning. Each contributes to our understanding of these technologies, and they will undoubtedly inspire new advances and innovations in the field.