Course Topics

  • Course Information

      • The "Lecture Series Artificial Intelligence" hosts lecturers from different scientific disciplines and backgrounds.
      • Some lecture dates have already been fixed; further time slots may still be filled during the semester.
      • Talks always start on Tuesdays at 14:00 h in Lecture Hall 1 (unless explicitly announced otherwise). Administrative questions, however, may be discussed before or after the talks.
      • Check the talk announcements below every Monday afternoon to see whether there is a talk on the following day.
      • There is no requirement of physical presence. Please use the shared file "Lecture Hall Occupancy" below if you would like to come to (some of) the talks.
      • All lectures are planned to be live-streamed to your end devices.
      • Recordings of the talks are planned to be available as video streams in Moodle for two weeks. Streams usually appear within 48 hours after the talks; please do not write emails about stream availability before that time has passed. The same holds for the availability of the speakers' slides.
      • Grading/Assignments: see below
      • For questions and feedback write to kofler@ml.jku.at

  • Lecture Hall Occupancy

    • There is no requirement of physical presence for this course.

      Please use this shared file and carefully read its instructions if you would like to be physically present on (some of) the lecture dates:

      Lecture Hall Occupancy

      Do not come to the lecture hall if you have not used this file or if your entry is marked in red!

  • Grading / Assignments

      • For each talk there will be an online assignment in Moodle. There is no exam beyond these assignments. Grading is based only on the assignments.
      • You have two weeks to hand in a plain-text file (.txt) of 200 to 400 words.
      • There are several reasons for the restriction to .txt files. Other file formats, as well as submissions via email, will not be accepted.
      • File name format: "yourstudentid_numberoflecture.txt", e.g. k09860565_3.txt for lecture 3
      • The file has to contain the following (in English):
      1. First half: a summary of the main message of the talk, in view of your personal background and interests in AI.
      2. Second half: answers to the following questions: Was the talk well suited for the target audience (1st-semester Bachelor students)? What did you personally find most interesting, and what did you specifically learn? What was less understandable, or what would you have liked to hear more about? How did you like the topic and the style of the presentation? You may also add anything else you would like to mention.

      • Grading of all reports will take place at the end of the semester. Do not expect marks/feedback on your reports during the semester.
      • Grading will depend on the quality and the number of your reports, not on your personal opinions.
      • If there are n assignments during the semester, you need to hand in at least n-1 reports for grade 1 ("Sehr gut") and at least n/2 reports for grade 4 ("Genügend"). If you hand in fewer than n/2 reports, you will receive no certificate (and not a negative one), so there is no need to deregister from this course in case you cannot follow it through (see the sketch at the end of this section).
      • Your summary/feedback may be forwarded to the speaker in an anonymized way.
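
      A minimal sketch of the stated report-count thresholds, for illustration only (this is not an official grading script; the function name is hypothetical, and the boundaries for grades 2 and 3 are not specified above):

      def report_count_status(n_assignments: int, n_reports: int) -> str:
          """Map the number of handed-in reports to the stated thresholds."""
          if n_reports < n_assignments / 2:
              # fewer than n/2 reports: no certificate (and no negative one) is issued
              return "no certificate"
          if n_reports >= n_assignments - 1:
              # at least n-1 reports: the report-count requirement for grade 1 ("Sehr gut") is met
              return "report count sufficient for grade 1"
          # at least n/2 reports: the report-count requirement for grade 4 ("Genügend") is met;
          # the exact grade also depends on the quality of the reports
          return "report count sufficient for at least grade 4"

      # Example with 11 assignments: 10 reports meet the grade-1 threshold,
      # 6 reports meet the grade-4 threshold, 5 reports yield no certificate.
      print(report_count_status(11, 10))
      print(report_count_status(11, 6))
      print(report_count_status(11, 5))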

  • Semester Overview

  • Lecture 1 (6 Oct. 2020)

      • Speaker:
        • Prof. Martin Müller
        • Department of Computer Science, University of Alberta, Canada
        • DeepMind Chair in Artificial Intelligence
      • Title:
        • Computer Go - From the Beginnings to AlphaGo and Beyond
      • Abstract:
        • The ancient game of Go has long been used as a test bed for measuring progress in Artificial Intelligence. Decades of work in heuristic search and machine learning culminated in DeepMind's famous AlphaGo programs, which convincingly defeated the world’s strongest human Go players in 2016 and 2017. Current Go programs have greatly surpassed the level of all human experts. In this lecture we introduce the main ideas and technologies behind these programs, following their historical development. In the final part of the talk, we will discuss recent and ongoing work which aims to generalize such approaches to problems that are less well-defined.

    • Due to some late registrations, the deadline for assignment 1 has been extended to Oct. 27, 14:00 h.

  • Lecture 2 (20 Oct. 2020)

      • Speaker:
        • Prof. Sepp Hochreiter
        • Institute for Machine Learning & LIT AI Lab, Johannes Kepler University Linz, Austria
      • Title:
        • Deep Learning - the Key to Enable Artificial Intelligence
      • Abstract:
        • Deep Learning has emerged as one of the most successful fields of machine learning and artificial intelligence, with overwhelming success in industrial speech, language, and vision benchmarks. Consequently, it has become the central field of research for IT giants like Google, Facebook, Microsoft, Baidu, and Amazon. Deep Learning is founded on novel neural network techniques, the recent availability of very fast computers, and massive data sets. At its core, Deep Learning discovers multiple levels of abstract representations of the input. The main obstacle to learning deep neural networks is the vanishing gradient problem. The vanishing gradient impedes credit assignment to the first layers of a deep network or to early elements of a sequence and therefore limits model selection. Most major advances in Deep Learning can be related to avoiding the vanishing gradient, such as unsupervised stacking, ReLUs, residual networks, highway networks, and LSTM networks. Currently, LSTM recurrent neural networks exhibit overwhelming success in different AI fields like speech, language, and text analysis. LSTM is used in Google's translation and speech recognition systems, Apple's iOS 10, Facebook's translation service, and Amazon's Alexa. We use LSTM in collaboration with Zalando and Bayer, e.g. to analyze blogs and Twitter news related to fashion and health. In the AUDI Deep Learning Center, which I am heading, and with NVIDIA, we apply Deep Learning to advance autonomous driving. In collaboration with Infineon we use Deep Learning for perception tasks, e.g. based on radar sensors. With Deep Learning we won the NIH Tox21 challenge and deploy it for toxicity and target prediction in collaboration with pharma companies like Janssen, Merck, Novartis, AstraZeneca, GSK, and Bayer, together with hardware-related companies like Intel, HP, Infineon, NVIDIA, and others.

  • Lecture 3 (27 Oct. 2020) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Prof. Oliver Bimber
        • Institute of Computer Graphics, Johannes Kepler University Linz, Austria
      • Title:
        • Seeing through Forest
      • Abstract:
        • With Airborne Optical Sectioning (AOS), we have introduced a synthetic aperture imaging technique that captures an unstructured light field with a camera drone. Color and thermal images recorded within the shape of a wide (possibly hundreds to thousands of square meters) synthetic aperture area above the forest are combined computationally to remove occluders, such as trees and other vegetation. The outcome is a largely occlusion-free view of the forest ground. AOS supports full 3D visualization but, in contrast to LiDAR, does not require depth reconstruction. It therefore supports real-time rates at low processing cost. A wide range of applications, such as wildlife observation, search and rescue, archaeology, forestry, and harvest assessment, has been investigated in the course of many field studies. In this talk, I will report on the achievements and the challenges of the AOS project, focusing on our recent findings in autonomous people classification for SAR missions, which will soon appear in Nature Machine Intelligence.

  • Lecture 4 (3 Nov. 2020) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Prof. Georg Gottlob
        • Department of Computer Science, University of Oxford, UK
      • Title:
        • My adventures with Datalog: Walking the thin line between theory and practice
      • Abstract:
        • I have worked on various subjects, but ever since my years as a postdoc, I have been fascinated by Datalog, a logic programming language for reasoning and database querying. I will start this talk with a short introduction to Datalog, followed by an overview of some theoretical results on Datalog variants. This will be interleaved with a tale of four Datalog-related companies I co-founded: DLVSystem, Lixto, Wrapidity, and DeepReason.ai. I will also comment on the difficulties an academic has to face when spinning out a start-up, and on the satisfaction you may experience when your results are used in practice.

  • Lecture 5 (10 Nov. 2020) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Dr. Hendrik Strobelt
        • IBM Research & MIT-IBM Watson AI Lab, Cambridge, USA
      • Title:
        • Human Interaction and Collaboration with Machine Learning models
      • Abstract:
        • With the increasing adoption of machine learning models across domains, we have to think about the human role when interacting with these models. In recent years, my collaborators and I have created a series of tools that use visualization and visual user interaction to help investigate the behavior of machine learning models (for NLP and computer vision). I will present a selection of these scientific tools that make humans play and then understand.

  • Lecture 6 (17 Nov. 2020) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Prof. Elisabeth Lex
        • Institute of Interactive Systems and Data Science, TU Graz, Austria
      • Title:
        • Recent Advances in (Session-based) Job Recommender Systems
      • Abstract:
        • People increasingly use business-oriented social networks such as LinkedIn or XING to attract recruiters and to look for jobs. Users of such networks make an effort to create personal profiles that best describe their skills, interests, and previous work experience. Even with such carefully structured content, it remains a non-trivial task to find relevant jobs. As a consequence, the field of job recommender systems has gained much traction in academia and industry. The main challenge that job recommender systems tackle is to retrieve a list of jobs for a user based on her preferences, or to generate a list of potential candidates for recruiters based on the job's requirements. Moreover, most online job portals offer the option to browse the available jobs anonymously in order to attract users to the portal. As a consequence, the only data a recommender system can exploit are anonymous user interactions with job postings during a session. In this talk, we will discuss ongoing research on job recommender systems. In particular, we will introduce the use of neural autoencoders to infer latent session representations in the form of embeddings, which are then used to generate recommendations in a k-nearest-neighbor manner. We will show that autoencoders produce more novel and surprising recommendations than state-of-the-art baselines in session-based recommender systems.

  • Lecture 7 (1 Dec. 2020) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Prof. Martina Seidl
        • Institute for Formal Models and Verification, Johannes Kepler University Linz, Austria
      • Title:
        • Competitions as Scientific Method
      • Abstract:
        • Automated reasoning, as successfully applied in software and hardware verification, is a complex task. It is supported by advanced reasoning tools like solvers and theorem provers that implement sophisticated techniques for solving provably hard problems. To objectively evaluate the state of the art, research communities organize competitions that distinguish the fastest and best tools. Besides providing insights into recent tool developments, such competitions also identify interesting research problems. Thus, the outcome of competitions motivates researchers to push the boundaries of their technologies, further improving the state of the art. In this talk, we take a closer look at the role of software competitions as a scientific method. Based on the examples of some successful competitions, we explain their general setup and how they contribute to the scientific progress of a research community.

  • Lecture 8 (15 Dec. 2020) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Prof. Peter Kovacs
        • Department of Numerical Analysis, University of Budapest, Hungary
      • Title:
        • Towards model-driven neural networks
      • Abstract:
        • The analysis of signals by means of mathematical transformations has proved to be an effective method in various areas, including filtering, system identification, feature extraction, classification, etc. The most widely used transform-domain techniques operate with fixed basis functions, like trigonometric functions in the Fourier transform, Walsh functions in the Walsh–Fourier transform, mother wavelet functions for wavelet transforms, etc. These transformations can be used to extract features and to reduce the dimension of the original data. Note that the relevance of the extracted information lies in the proper choice of the function system, which also incorporates domain knowledge. However, these handcrafted features are usually suboptimal with respect to the whole learning process. Deep learning (DL) techniques along with representation learning provide good alternatives for extracting discriminative information from raw data. Despite their advantages, DL techniques continue to raise several concerns. Their improved efficiency comes at the cost of losing explainability. Indeed, due to the large number of nonlinear connections between the model parameters, DL approaches can be considered black-box methods, whose parameters have no physical meaning and are difficult or impossible to interpret. In this talk, we incorporate the representation abilities of adaptive orthogonal transformations and the prediction abilities of neural networks (NNs) in the form of hybrid models. This is a recent trend in signal processing in which mathematical model-based principles and data-driven machine learning are combined. In order to demonstrate the potential of these model-driven deep learning techniques, we present two case studies. First, we consider the problem of thermographic image regression for non-destructive material testing. Then, motivated by the classification of biomedical signals and by adaptive orthogonal transformations, we introduce VPNet, a novel model-driven NN architecture with the advantages of learnable features, interpretable parameters, and a compact network structure.

  • Lecture 9 (12 Jan. 2021) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Dr. Doris Allhutter
        • Institute of Technology Assessment, Austrian Academy of Sciences, Vienna, Austria
      • Title:
        • Algorithmic Welfare: Citizen Profiling in the Public Sector
      • Abstract:
        • In recent years, a number of European countries have made attempts to introduce data-based decision-support systems in public job services. To make use of the ‘knowledge in data’, agencies such as the Public Employment Service Austria (AMS) have been working on the algorithmic profiling of job seekers.
        • Starting in 2021, a new semi-automated assistance system (AMAS for short) is supposed to calculate the future chances of job seekers on Austria's labor market. Based on past statistics, job seekers will be classified into three groups, to which different resources for further education are allocated. AMAS looks for connections between job seeker characteristics and successful employment. The characteristics include age, group of countries, gender, education, care obligations, and health impairments, as well as past employment, contacts with the AMS, and the labor market situation in the place of residence. The aim is to invest primarily in those job seekers for whom the support measures are most likely to lead to reintegration into the labor market. The system is supposed merely to provide the AMS with an additional function in the care of job seekers. However, the so-called AMS algorithm has far-reaching consequences for job seekers, AMS staff, and the AMS as a public service institution.
        • This talk shows how the design of the AMS algorithm is influenced by technical affordances and, most importantly, by the social values, norms, and interests of different stakeholders. A discussion of the tensions, challenges, and biases that the system entails calls into question the objectivity and neutrality of data claims and the high hopes pinned on evidence-based decision-making. In this way, it sheds light on the coproduction of (semi-)automated managerial practices in employment agencies and the framing of unemployment under the paradigmatic transformation of the welfare state into an “enabling state” that aims at mobilizing citizens' self-responsibility.
        • Doris Allhutter is a senior scientist in science and technology studies at the Institute of Technology Assessment of the Austrian Academy of Sciences. She researches how social inequality and difference co-emerge with sociotechnical systems and explores how practices of computing are implicitly normative and entrenched in societal power relations.

  • Lecture 10 (19 Jan. 2021) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Prof. Tega Brain
        • Tandon School of Engineering, New York University, USA
      • Title:
        • Misbehaving Systems: Experiments in Distributing Agency
      • Abstract:
        • Contemporary artists engaging with recent developments in the field of artificial intelligence continue a long history of artistic work that explores the politics and unintended consequences of emerging technologies through strategies like hacking, misuse, and open-ended experimentation. Within this context, I will discuss my own recent work that examines the possibilities and limitations of data and artificial intelligence in environmental inquiry and ecological management. Does AI have the potential to produce new ecological relations or, to quote theorist Donna Haraway, does it risk simply amplifying “an informatics of domination”?
        • Tega Brain is an Australian-born artist and environmental engineer whose work examines issues of ecology, data systems, and infrastructure. She has created wireless networks that respond to natural phenomena, systems for obfuscating fitness data, and an online smell-based dating service. She has recently exhibited at the Vienna Biennale for Change, the Guangzhou Triennial, the Haus der Kulturen der Welt in Berlin, the New Museum in New York City, and the Science Gallery in Dublin. Her work has been widely discussed in the press, including in the New York Times, Art in America, The Atlantic, NPR, Al Jazeera, and The Guardian. Tega is also an Assistant Professor of Integrated Digital Media at New York University. http://www.tegabrain.com

  • Lecture 11 (26 Jan. 2021) - Lecture will take place only online (use link below), no physical presence

      • Speaker:
        • Prof. Günter Klambauer
        • Institute for Machine Learning & LIT AI Lab, Johannes Kepler University Linz, Austria
      • Title:
        • How neural AIs are changing healthcare, medicine and drug discovery
      • Abstract:
        • Machine Learning methods, and especially Deep Learning methods, have recently led to a tremendous change in medicine, healthcare, and drug discovery. In this lecture, we provide background on these methods and show applications and examples of neural AIs in these life-science areas.

  • Announcements

  • Course Evaluation

    • Students can evaluate this course anonymously via KUSSS in February. Please take this opportunity!

      In addition, you may always email me any kind of feedback: kofler@ml.jku.at

  • Word Cloud

    • Word cloud generated from all reports:

    • wordcloud_AI_Ls.png

  • Grade Statistics

    • Grade    Count    Share (%)
      1          134      63.2
      2           21       9.9
      3           47      22.2
      4           10       4.7
      Total      212     100.0

    • Grades were issued at the beginning of March.