
SIGAI Workshop on Emerging Research Trends in Artificial Intelligence (ERTAI - 2010)

17th April, 2010, C-DAC, Navi Mumbai, India

Supported by Computer Society of India (CSI)


Keynote Address by Prof. Rajeev Sangal, IIIT Hyderabad

Two-stage Constraint Parsing for Indian Languages
Natural Language Processing deals with understanding and developing computational theories of human language. Such theories allow us to understand the structure of language and to build computer software that can process language. For example, if a query in a human language can be processed (that is, analyzed and understood) by the machine, it can then try to find an answer from a given database or from a set of documents. A search engine of the future is likely to use such technology.

Parsing gives the grammatical analysis of a given sentence. Here, we describe two-stage parsing in the Computational Paninian Grammar framework. The parser is a constraint solver, with the constraints expressed as integer programming constraints. Research results on its performance will be presented and compared with data-driven parsing.
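The constraint-solving view of parsing mentioned in the abstract can be illustrated with a toy sketch. This is not the ERTAI parser itself: real systems encode the problem as an integer program, whereas this sketch brute-forces the same constraint space (each word picks one head; exactly one word attaches to the root; no cycles) to stay self-contained. The word list and scoring function are illustrative assumptions.

```python
from itertools import product

def has_cycle(heads):
    """Detect a cycle among the head assignments (1-based word
    indices; head 0 is the artificial root)."""
    n = len(heads)
    for start in range(1, n + 1):
        seen, node = set(), start
        while node != 0:
            if node in seen:
                return True
            seen.add(node)
            node = heads[node - 1]
    return False

def parse(words, score):
    """Toy constraint parser: choose a head for each word subject to
    tree constraints, maximizing the sum of arc scores score(head,
    dependent). Brute force stands in for an integer-program solver."""
    n = len(words)
    best, best_heads = float("-inf"), None
    for heads in product(range(n + 1), repeat=n):
        # Constraint 1: exactly one word attaches to the root.
        if sum(1 for h in heads if h == 0) != 1:
            continue
        # Constraint 2: no word is its own head, and no cycles.
        if any(h == i + 1 for i, h in enumerate(heads)):
            continue
        if has_cycle(heads):
            continue
        total = sum(score(h, i + 1) for i, h in enumerate(heads))
        if total > best:
            best, best_heads = total, heads
    return best_heads

# A score that prefers short arcs picks the chain root -> the -> dog -> barks.
tree = parse(["the", "dog", "barks"], lambda h, d: -abs(h - d))
```

In an actual integer-programming formulation, each candidate arc becomes a 0/1 variable and the tree constraints become linear (in)equalities, so an off-the-shelf ILP solver can replace the brute-force loop.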

Invited Talk by Dr. R Uthurusamy, General Motors

AI Research Trends and Resources: A Personal View
A personal view of current AI research trends and resources will be presented in three parts. The first part will outline resources available to AI researchers and practitioners, including advice for beginning graduate students on doing research. The second part consists of short videos of a selected set of AI and other innovative research projects. The concluding part will present a few actionable suggestions to assist those seeking interesting AI research areas and innovative applications.

Invited Talk by Dr. Hiranmay Ghosh, TCS Innovation Labs, Delhi

Semantic Multimedia Web
The vision of the semantic web proposes an environment where the data and services on the web can be semantically interpreted and processed by machines to facilitate human consumption. In today's cyberspace, audio-visual artifacts compete with traditional text and data in their information content. Machine interpretation of multimedia data is therefore essential for realizing the semantic web vision. Semantic web technology relies on ontology as a tool for modeling an abstract view of the real world and for contextual semantic analysis of documents. Ontology languages such as the Web Ontology Language (OWL) use linguistic constructs for modeling the real world and can be conveniently used for interpreting textual documents. Attempts to use ontology for interpreting multimedia content are hindered by the semantic gap between the media features appearing in the documents and the linguistic structures representing the concepts in the ontology. We argue that concepts have their roots in the perceptual experience of human beings, and that the apparent disconnect between the conceptual and the perceptual worlds is rather artificial. The key to semantic processing of media data lies in harmonizing the seemingly isolated conceptual and perceptual worlds. In this context, we propose a new ontology-based approach for contextual semantic interpretation of multimedia data and services on the web. This ontology representation, the Multimedia Web Ontology Language (MOWL), is an extension of OWL and supports the perceptual modeling and reasoning essential for semantic multimedia applications.
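The core idea of the abstract, extending a linguistic (OWL-style) ontology with a perceptual layer of media features, can be sketched in a few lines. All the names below (Tiger, StripedTexture, has_media_feature) are illustrative stand-ins, not actual MOWL vocabulary:

```python
# Toy stand-in for a MOWL-style ontology: an OWL-like class layer
# plus perceptual associations linking concepts to observable
# media features. Triples are modeled as plain tuples.
ontology = {
    "classes": {"Animal", "Tiger"},
    "subclass_of": {("Tiger", "Animal")},              # linguistic layer
    "has_media_feature": {("Tiger", "StripedTexture"),
                          ("Tiger", "OrangeColour")},  # perceptual layer
}

def media_evidence_for(concept, observed_features):
    """Return which of the concept's expected media features were
    actually observed in a document -- a toy stand-in for the
    perceptual reasoning MOWL enables over multimedia content."""
    expected = {f for c, f in ontology["has_media_feature"] if c == concept}
    return expected & set(observed_features)

# Observing a striped texture provides partial perceptual evidence
# for the concept "Tiger".
matched = media_evidence_for("Tiger", ["StripedTexture", "Roar"])
```

The point of the sketch is the bridging step: instead of matching only linguistic labels, the ontology lets an interpreter relate low-level media observations back to concepts, which is the semantic gap the abstract discusses.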
The workshop also includes research paper presentations, along with an open discussion on AI research trends, challenges, and methodologies.

For more information please visit: http://sigai.cdacmumbai.in/index.php/ertai-2010
