
OASIS 4 2025 : Ontology As Structured by the Interfaces with Semantics 4


Link: https://oasis-4.webflow.io/cfp
 
When: Jan 15, 2025 - Jan 17, 2025
Where: York, UK
Submission Deadline: Nov 25, 2024
Categories: linguistics, LLM, philosophy, psychology
 

Call For Papers

OASIS 4 (Ontology As Structured by the Interfaces with Semantics 4) will take place at the University of York in the UK, January 15-17, 2025.

The OASIS conference series aims to promote conversation across different disciplines that interface with semantics, using ontological questions as shared reference points. The broad questions in the background are these:

What basic ontological building blocks do we use to talk and think about the world?
How do these building blocks get combined?
How do grammatical and cognitive phenomena motivate the answers to the first two questions?

For more information, see the OASIS credo.


Invited speakers

David Adger, Queen Mary University of London

Michelle Sheehan, Newcastle University

Phillip Wolff, Emory University


Special session: Large Language Models and ontological tasks

The most advanced large language models (LLMs) sometimes perform well at making inferences about relationships between entities in the world. For example, Bubeck et al. 2023 asked GPT-4 to draw a unicorn using TikZ, a markup language for generating graphics, with some success: it put the visual elements of the unicorn in roughly the right places. More often, however, LLMs perform poorly on such inference tasks, and may require additional non-linguistically-supplied information about the structure of the world to perform well (Wong et al. 2023, Mahowald et al. 2024).

A question arises: To the extent that LLMs perform well on these ontological tasks without additional information, are they generating non-linguistic models of the world to do so, with their own ontologies? We can wonder further: What is the inventory of ontological reasoning tasks that current LLMs can succeed on? What do they have to "know" and not "know" to have the performance that they have? And what, if anything, does the performance of LLMs on ontological tasks have to do with natural language ontology (either lexical, formal/syntactic/grammatical, or both) as it is situated in the brain and is related by humans to the world?

In this special session we are interested in abstracts aimed at an interdisciplinary audience, reflecting any of the following:

- research that uses LLMs to identify implicit ontologies of the physical and human world, including spatial configurations, force dynamics, human behavior, and causal relevance

- research that clarifies what common-sense or technical notions of "language" and "thought"/"cognition" have to do with LLM behavior with respect to world-modeling

- research that relates LLM-identified ontologies or LLM performance on ontological tasks to ontologies in the brain as understood using methods of psychology, psycholinguistics, formal linguistics, philosophy, and neuroscience

- research on how and why giving LLMs certain kinds of information (e.g., system prompting, use of other modalities, key information about language structure or world structure) does or does not affect their performance on ontological tasks

- research analyzing how the abilities or limitations of LLMs in ontological tasks are related to their formal properties

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., Lundberg, S. and Nori, H., 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.

Mahowald, K., Ivanova, A.A., Blank, I.A., Kanwisher, N., Tenenbaum, J.B. and Fedorenko, E., 2024. Dissociating language and thought in large language models. Trends in Cognitive Sciences.

Wong, L., Grand, G., Lew, A.K., Goodman, N.D., Mansinghka, V.K., Andreas, J. and Tenenbaum, J.B., 2023. From word models to world models: Translating from natural language to the probabilistic language of thought. arXiv preprint arXiv:2306.12672.


Abstract submission

Abstracts must be anonymous, in PDF format, 2 A4 pages, in a font size no smaller than 12pt. You may submit at most two abstracts, but you may be the sole author of only one. Send your abstract as a PDF to oasisyork4@gmail.com.

Linguists and others submitting very technical research: it is essential that you do what you can to make your abstract accessible to an interdisciplinary audience. This doesn't mean eschewing all formalism, but do pitch your abstract so that a non-linguist reader can get something interesting out of it. What it lacks in nuance, it will make up for in power.


Important dates

November 25, 2024: Abstract deadline
Early December, 2024: Notification
