
SAIDD - Symposium on AI, Data and Digitalization

Here are our key takeaways from the SAIDD 2023 conference.


SAIDD 2023

In recent years, more and more organizations have been familiarizing themselves with and implementing Big Data and AI technologies as part of their digital transformation. The most valuable aspect of digitalization is that the physical world is transformed through improved productivity, innovation and impact. The opportunities with AI are many, across research, finance and government. Given its significant transformational scope, AI has the potential to profoundly impact the economy while addressing regional, national and global challenges. This impact has yet to be realized and will depend on significant research, development, innovation and commercial efforts. To realize the opportunities of Big Data and AI technologies, it is essential that research and innovation in the field are supported and that they build on already strong players. New challenges and perspectives require new theory, methodology, good practices and systems, which should be developed, shared and discussed among the relevant stakeholders.

SAIDD 2023 provided new insights and an introduction to the latest trends in AI and data-driven innovation. The symposium was an arena for exchanging knowledge and experiences with other professionals and companies working with AI, data and digitalization in their research and operations. Over the two days we heard from renowned national and international experts in the field, among them Sören Auer (Director of TIB and head of a research group at Leibniz University Hannover), Kathleen M. Carley (Professor of Engineering and Director of CASOS at Carnegie Mellon University), Jasmien César (Senior Counsel for Privacy and Data Protection for AI at Mastercard), Virginia Dignum (Professor of Responsible AI and Program Director at Umeå University), Fosca Giannotti (Professor of Computer Science at Scuola Normale Superiore in Pisa, Italy), and Trym Holter (Head of Business Development at Silo AI and former Director of the Norwegian Open AI Lab).

In this post, we would like to share some of the content presented by the keynote speakers during SAIDD 2023.

Leveraging Cognitive Knowledge Graphs for Science and Economic Resilience 

Presented by Sören Auer (Director of TIB and Head of the Data Science and Digital Libraries research group, Leibniz University Hannover).

Knowledge graphs are a graphical representation of a collection of knowledge and information, organized in a way that makes it easy to explore and extract useful insights. A knowledge graph consists of nodes that represent entities or concepts, and edges that represent the relationships between them. Knowledge graphs are often used in conjunction with AI and machine learning to analyze large amounts of data and extract insights from them. They are also used in search engines and knowledge-based systems to provide more precise and relevant answers to queries.
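To make the node-and-edge idea concrete, here is a minimal sketch in Python (using the networkx library, with entity and relation names invented purely for illustration) of how a small knowledge graph can be built and queried:

```python
# Minimal sketch of a knowledge graph: nodes are entities/concepts,
# edges carry the relationship between them. All names are hypothetical.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Paper:1234", "Knowledge graphs", relation="has_research_field")
kg.add_edge("Paper:1234", "Link prediction", relation="addresses_problem")
kg.add_edge("Paper:5678", "Knowledge graphs", relation="has_research_field")

# Query: which papers belong to the field "Knowledge graphs"?
papers = [src for src, dst, data in kg.edges(data=True)
          if dst == "Knowledge graphs" and data["relation"] == "has_research_field"]
print(papers)  # ['Paper:1234', 'Paper:5678']
```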

In his lecture, Sören Auer presented the concept of cognitive knowledge graphs, which use richer atomic base units, graphlets, as constituents. This refers to a method of representing graphs by breaking them down into smaller, contiguous sub-graphs. Auer presented the Open Research Knowledge Graph (https://orkg.org/) as an example of a cognitive knowledge graph that intertwines human and machine intelligence in order to represent research contributions in a way that brings coherence to research data. Each research contribution is given a structured, standardized description so that research data can be compared and linked across fields.

As a result, ORKG provides a digital infrastructure that enables researchers and other stakeholders to gain a better overview of millions of publications and scholarly content.
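As a rough illustration of the idea behind structured contribution descriptions, the following sketch uses invented property names and paper labels (not actual ORKG data or its API) to show how contributions described with the same properties become directly comparable:

```python
# Hypothetical structured descriptions of two research contributions.
# Because both use the same properties, they can be compared field by field.
contributions = {
    "Paper A": {"research problem": "question answering",
                "method": "knowledge graph embedding",
                "dataset": "QA benchmark (hypothetical)"},
    "Paper B": {"research problem": "question answering",
                "method": "retrieval-augmented language model",
                "dataset": "QA benchmark (hypothetical)"},
}

for prop in ["research problem", "method", "dataset"]:
    values = {paper: desc[prop] for paper, desc in contributions.items()}
    print(prop, "->", values)
```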

Social Cybersecurity: Synthesizing Social Science Theory, AI & Network Science to Support Social Engagement

Presented by Kathleen M. Carley (Professor in the Public Policy, Computer Science, and Social and Decision Sciences departments at Carnegie Mellon University, and Director of the Center for Computational Analysis of Social and Organizational Systems, CASOS).

Social cybersecurity is a new interdisciplinary social science that aims to characterize, understand, and predict cyber-mediated changes in human behavior and the related social, cultural, and political outcomes. The term cyber-mediated refers to something that occurs through the use of computer networks, the internet, or digital technologies. In her lecture, Kathleen Carley set out several key challenges, discussing influence activities, disinformation, bots and hate speech, among others. She pointed to a number of empirical results from applying new social cyber technologies to areas such as the COVID-19 response, the reopening of America, the elections in the Philippines, and other world events. The results are framed by the new Behavioural, Economic, Network, Determination (BEND) framework, a structured approach to analysing, assessing and responding to influence campaigns in terms of behavioural, economic, network and deterrence factors. Carley went into the analytical process needed to operationalize this, as well as the role of AI and network science, and presented key insights on the strengths and limitations of using AI for social cybersecurity.

Carley's research combines cognitive science, social networks, and computer science to address complex social and organizational problems. Carley and her research team have developed infrastructure tools for analysing large-scale dynamic networks, as well as various multi-agent simulation systems. A multi-agent simulation system is a computer program or model that recreates the interactions between multiple agents in a virtual or simulated environment; each agent is an autonomous entity that perceives and reacts to the environment and to other agents according to certain rules and objectives. Her presentation concluded with a description of areas where future research will be needed to support social interaction and inhibit harmful online activity.
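As a toy illustration of the multi-agent idea, the following sketch (with invented rules and parameters, not one of Carley's simulation systems) lets a set of agents repeatedly adjust an opinion value in reaction to the other agents:

```python
# Minimal multi-agent simulation sketch: each agent holds an opinion in [0, 1]
# and nudges it toward the average opinion of the other agents at every step.
import random

class Agent:
    def __init__(self, opinion):
        self.opinion = opinion

    def step(self, neighbours):
        avg = sum(n.opinion for n in neighbours) / len(neighbours)
        self.opinion += 0.1 * (avg - self.opinion)  # react to the environment

random.seed(0)
agents = [Agent(random.random()) for _ in range(20)]
for _ in range(50):                                 # run 50 simulation steps
    for agent in agents:
        others = [a for a in agents if a is not agent]
        agent.step(others)

# Opinions drift toward consensus over the simulation run.
print(round(min(a.opinion for a in agents), 3),
      round(max(a.opinion for a in agents), 3))
```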

AI Regulatory Developments: A Deep Dive on the EU AI Act

Presented by Jasmien César (Senior Counsel of Privacy and Data Protection for AI at Mastercard).

In April 2021, the European Commission put forward a proposal aimed at regulating the development and use of artificial intelligence in the European Union. The proposal contains provisions intended to protect human rights and privacy while still promoting the growth and development of artificial intelligence. It is part of the EU's wider digital strategy, which aims to ensure that technology is used responsibly while Europe remains competitive in the field. AI software must be robust and resistant to cyber attacks, and the data used must be of high quality. High-risk AI systems must also be registered in a public EU database.

César also presented the risk assessments of AI made by the European Union, which divide risk into four categories: unacceptable risk, high risk, transparency risk and minimal/no risk. Social scoring, the scoring of individuals based on their online behaviour and activities, falls under unacceptable risk and would thus be prohibited under the EU AI Act. Under high risk she listed, among other things, biometric identification and the use of AI in hiring processes; these applications will not be prohibited, but will be strictly regulated by law. Chatbots and deep fakes fall under transparency risk: for chatbots such as ChatGPT (https://openai.com/blog/chatgpt), the EU AI Act introduces an obligation to inform the public and users that they are communicating with a bot and that AI is being used. Applications classified as minimal/no risk will not be regulated by the EU.

Responsible Artificial Intelligence: An Inclusive Road Ahead

Presented by Virginia Dignum (Wallenberg Chair, Professor of Responsible Artificial Intelligence, Program Director of WASP-HS, Department of Computing Science, Umeå University).

Artificial intelligence (AI) has great potential to add accuracy, efficiency, cost savings and speed to a whole range of human activities, while providing new insights into behavior and cognition. These benefits could be substantial for today's society. Nevertheless, many are skeptical about its development and implementation. The way AI is developed and implemented says a lot about how a service will affect our lives and communities. For example, automated classification systems have on several occasions delivered biased results, which raises questions of privacy and bias. AI's impact concerns not only research and development, but also how these systems are introduced into society and used in everyday situations. There is a major ongoing debate about how the use of AI will affect work, wellbeing, social interactions, health, income distribution and other social areas. Addressing these challenges requires that ethical, legal, societal and economic implications be taken into account.
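As a small illustration of how bias in an automated classifier can be made measurable, the following sketch uses synthetic data and hypothetical group labels to compare positive-prediction rates between two groups (one simple fairness measure among many):

```python
# Minimal sketch: difference in positive-prediction rates between two groups
# scored by an automated classifier. Data and group labels are synthetic.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                  # 0 = group A, 1 = group B
predictions = rng.random(1000) < (0.4 + 0.2 * group)   # outcomes skewed toward B

rate_a = predictions[group == 0].mean()
rate_b = predictions[group == 1].mean()
print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, gap={rate_b - rate_a:.2f}")
```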

In the talk, Virginia Dignum discussed how a responsible approach to the development and use of AI can be achieved, and how current practices for ensuring ethical alignment of decisions made or supported by AI systems can benefit from the social perspective embedded in feminist and non-Western philosophies, especially the Ubuntu philosophy. Emphasizing human community, cooperation and ethical values, this philosophy has previously served as an inspiration for post-apartheid reconciliation in South Africa and is today part of the country's national cultural heritage.

Explainable AI (XAI) a Basic Break Towards Synergistic Human-Machine Interaction and Collaboration

Presented by Fosca Giannotti (Professor of Computer Science at Scuola Normale Superiore, Pisa & Associate at the Information Science and Technology Institute “A. Faedo” of CNR, Pisa, Italy).

Giannotti presented literature and work within XAI (explainable AI). She believes that the future of AI lies in enabling collaboration between humans and machines to solve complex problems, and that making this happen requires clarity, trust and understanding. XAI makes decisions and outcomes understandable to humans, which is crucial in high-risk decision-making. Traditionally, AI relies on “black boxes” that produce a decision without any explanation of how it was reached. XAI increases transparency by letting people see how a decision was made, allowing users to evaluate and validate it; this can help correct the errors that lead to wrong decisions. Giannotti also presented the “Dr. House method”, which refers to a systematic approach for diagnosing the causes of a decision made by an AI model.
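As one concrete, simplified example of an XAI technique (not necessarily one used by Giannotti), the following sketch trains a shallow, human-readable decision tree to approximate the predictions of a black-box model on synthetic data, so the logic behind its decisions becomes inspectable:

```python
# Minimal surrogate-model sketch: approximate a "black-box" classifier with an
# interpretable decision tree trained on the black box's own predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # synthetic features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # synthetic labels

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A shallow tree mimicking the black box gives a readable explanation of it.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```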

The State of AI in the Nordic Region

Presented by Trym Holter (Business Development Executive, Silo AI & Former Director, Norwegian Open AI Lab).

Trym Holter discussed the current state of AI in the Nordic region, focusing primarily on the implementation of AI technologies in the private and public sectors. Holter highlighted key barriers to, and opportunities for, further development of AI-based products and services, and stressed the importance of increased investment in AI research and development. He presented figures showing that only a quarter of companies invest more than 20% of their R&D budget in AI. He also pointed out that a common tendency in the Nordic countries is difficulty moving past the proof-of-concept phase, and that one of the biggest barriers is attracting enough talent and knowledge.

The values shared by the Nordic countries are often said to help make the region one of the most innovative and competitive in the world. Although the Nordic countries are pioneers in AI readiness, ethics and trustworthiness, the region is still at an early stage when it comes to implementing AI technologies in products and services. Clearer strategies and stronger cooperation are regularly called for, and the Nordic authorities have varying levels of investment and differing strategies for accelerating the use of AI.

Thank you to the Norwegian Research Council and Vestlandsforskning for the invitation to participate in the conference. If you have any questions or want to hear more, please get in touch with us.


