All Events

Upcoming Events

Towards Real-World Fact-Checking with Large Language Models
Iryna Gurevych | Technical University of Darmstadt

2024-05-03, 10:00 - 11:00
Kaiserslautern building G26, room 111

Abstract:

Misinformation poses a growing threat to our society. It has a severe impact on public health by promoting fake cures or vaccine hesitancy, and it is used as a weapon during military conflicts to spread fear and distrust. Current natural language processing (NLP) fact-checking research focuses on identifying evidence and the veracity of a claim. People’s beliefs, however, often depend not on the claim and its rational justification as such, but on credible-seeming content that makes the claim appear more reliable, such as scientific publications or visual content that was manipulated or stems from an unrelated context. To combat misinformation, we need to answer (1) "Why was the claim believed to be true?", (2) "Why is the claim false?", and (3) "Why is the alternative explanation correct?". In the talk, I will zoom into two critical aspects of misinformation supported by credible though misleading content. First, I will present our efforts to dismantle misleading narratives based on fallacious interpretations of scientific publications. Second, I will show how we can use multimodal large language models to (1) detect misinformation based on visual content and (2) provide strong alternative explanations for the visual content.

Speaker's bio:

Iryna Gurevych (PhD 2003, U. Duisburg-Essen, Germany) is Professor of Computer Science and director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University (TU) of Darmstadt in Germany. In addition, she is Adjunct Professor at MBZUAI in Abu Dhabi, UAE, and Affiliated Professor at INSAIT, Sofia, Bulgaria. Her main research interests are in machine learning for large-scale language understanding and text semantics. Iryna’s work has received numerous awards, including being named an ACL Fellow in 2020, the first-ever Hessian LOEWE Distinguished Chair in 2021 (2.5 million Euro), and an ERC Advanced Grant in 2022 (2.5 million Euro). Iryna is co-director of the NLP program within ELLIS, a European network of excellence in machine learning. In 2023, she was president of the Association for Computational Linguistics (ACL). In 2024, she was elected a Member of the German National Academy of Sciences Leopoldina.


Recent Events



Making machine learning predictably reliable
Andrew Ilyas | Massachusetts Institute of Technology

2024-04-17, 10:00 - 11:00
Kaiserslautern building G26, room 111

Abstract:

Despite ML models' impressive performance, training and deploying them is currently a somewhat messy endeavor. But does it have to be? In this talk, I give an overview of my work on making ML "predictably reliable": enabling developers to know when their models will work, when they will fail, and why.

To begin, we use a case study of adversarial inputs to show that human intuition can be a poor predictor of how ML models operate. Motivated by this, we present a line of work that aims to develop a precise understanding of the ML pipeline, combining statistical tools with large-scale experiments to characterize the role of each individual design choice: from how to collect data, to what dataset to train on, to what learning algorithm to use.

Speaker's bio:

Andrew Ilyas is a PhD student in Computer Science at MIT, where he is advised by Aleksander Madry and Constantinos Daskalakis. His research aims to improve the reliability and predictability of machine learning systems. He was previously supported by an Open Philanthropy AI Fellowship.



Digital Safety and Security for Survivors of Technology-Mediated Harms
Emily Tseng | Cornell University

2024-03-11, 10:00 - 11:00
Saarbrücken building E1 5, room 002

Abstract:

Platforms, devices, and algorithms are increasingly weaponized to control and harass the most vulnerable among us. Some of these harms occur at the individual and interpersonal level: for example, abusers in intimate partner violence (IPV) use smartphones and social media to surveil and stalk their victims. Others are more subtle, at the level of social structure: for example, in organizations, workplace technologies can inadvertently scaffold exploitative labor practices. This talk will discuss my research (1) investigating these harms via online measurement studies, (2) building interventions to directly assist survivors with their security and privacy; and (3) instrumenting these interventions, to enable scientific research into new types of harms as attackers and technologies evolve. I will close by sharing my vision for centering inclusion and equity in digital safety, security and privacy, towards brighter technological futures for us all.

Speaker's bio:

Emily Tseng is a PhD candidate in Information Science at Cornell University. Her research explores the systems, interventions, and design principles we need to make digital technology safe and affirming for everyone. Emily’s work has been published at top-tier venues in human-computer interaction (ACM CHI, CSCW) and computer security and privacy (USENIX Security, IEEE Oakland). For 6 years, she has served as a researcher-practitioner with the Clinic to End Tech Abuse, where her work has enabled specialized security services for over 600 survivors of intimate partner violence (IPV). Emily is the recipient of a Microsoft Research PhD Fellowship, Rising Stars in EECS, Best Paper Awards at CHI, CSCW, and USENIX Security, and third place in the Internet Defense Prize. She has additionally completed internships at Google and with the Social Media Collective at Microsoft Research. She holds a B.A. from Princeton University.



Designing for Autonomy in Data-Driven AI Systems
Tiffany Ge Wang | University of Oxford

2024-03-07, 10:00 - 11:00
Saarbrücken building E1 5, room 002

Abstract:

As ubiquitous AI becomes increasingly integrated into the smart devices people use daily, extensive datafication, data surveillance, and monetized behavioral engineering are becoming ever more noticeable. Sophisticated algorithms perform in-depth analyses of people's data, dissecting it to evaluate personal characteristics and thereby making significant and impactful algorithmic decisions for them. In this evolving digital environment, smart devices are no longer just functional tools; they have become active agents in shaping experiences, transforming lives as algorithmic decisions etch pathways for people's futures. This trend is particularly concerning for vulnerable groups such as children, young people, and other marginalized communities, who may be disproportionately affected by these technological advancements. My research in human-computer interaction focuses on reimagining these data-driven AI systems to better support user autonomy. To address these challenges, I develop tools and systems that empower users and communities, especially those most vulnerable, to control their own experiences and information directly. These include: 1) human-AI interaction tools that enhance user decision-making power, 2) AI literacy tools for a deeper, critical understanding of data-driven systems, and 3) actionable strategies and frameworks for policymakers and industry leaders to ensure the ethical development and use of AI technologies.

Speaker's bio:

Tiffany Ge Wang is a final-year doctoral candidate in Computer Science at the University of Oxford. Her research lies at the intersection of human-computer interaction (HCI) and human-centered privacy & security. Through her research, she aims to empower humans, especially those in at-risk populations, to have autonomy in their dealings with data-driven systems. She has received 7 paper awards at top-tier HCI conferences and journals, including ACM CHI and CSCW. Her research has been cited by influential organizations such as the Council of Europe, the UK ICO, the Australian ICO, and the FTC, and has been featured in news articles in El País, among others. She holds a bachelor's degree in Physics from the University of Oxford and an MSc in Information Science from University College London.



Beyond the Search Bar in Human-AI Interactions: Augmenting Discovery, Synthesis, and Creativity With User-Generated Context
Srishti Palani | University of California, San Diego

2024-03-05, 10:00 - 11:00
Kaiserslautern building G26, room 111

Abstract:

Searching and exploring information online is integral to everyday life, shaping how we learn, work, and create. As the Web paradigm evolves to include foundational AI models and beyond, we are experiencing a shift in how we search and work. With this transformation in human-AI interaction, it is important to investigate how we might present the user with the right information in the right context, the right representation, and at the right time. In this talk, I will share how I have explored these questions in the context of complex critical information work (such as knowledge discovery, synthesis, and creativity). I present insights about user behaviors and challenges from mixed-method studies observing how people conduct this work using today’s tools. Then, I present novel AI-powered tools and techniques that augment these cognitive processes by mining rich contextual signals from unstructured user-generated artifacts. By deepening our understanding of human cognition and behavior and building tools that understand user contexts more meaningfully, I envision a future where human-AI interactions are more personalized, context-aware, cognitively-convivial, and truly collaborative.

Speaker's bio:

Srishti Palani is a PhD Candidate at the University of California, San Diego. She researches at the intersection of human-computer interaction, cognitive science, and artificial intelligence. She conducts mixed-methods studies to deepen our understanding of how people search, synthesize, and create using vast amounts of disparate information on the Web and Large Language Models. Based on this understanding of user behavior, she develops novel intelligent interaction techniques to augment knowledge discovery, sensemaking, and creativity! Her research has been published at top conferences such as ACM CHI, UIST, CSCW, CHIIR, and SIGIR and has won several prestigious awards, including best paper awards and nominations. Among other honors, she is a Google PhD Research Fellow, Heidelberg Laureate Young Researcher, NCWIT Aspirations in Computing Awardee, and Grace Hopper Research Scholar mentor. During her PhD, she has also worked in top industry research labs such as Microsoft Research, Autodesk Research, and the Allen Institute for AI. Before her PhD, she graduated summa cum laude from Mount Holyoke College, double majoring in Computer Science and Psychology, where her thesis was a CRA Outstanding Undergraduate Researcher Award finalist and awarded the Phi Beta Kappa Research Prize. Outside of research, she is passionate about establishing mentorship programs to bridge the gender gap in tech and teaching computational and design thinking courses to the next generation of innovators.



Data Privacy in the Decentralized Era
Amrita Roy Chowdhury | University of California, San Diego

2024-03-01, 10:00 - 11:00
Saarbrücken building E1 5, room 002

Abstract:

Data is today generated on smart devices at the edge, shaping a decentralized data ecosystem comprising multiple data owners (clients) and a service provider (server). Clients interact with the server with their personal data for specific services, while the server performs analysis on the joint dataset. However, the sensitive nature of the involved data, coupled with an inherent misalignment of incentives between clients and the server, breeds mutual distrust. Consequently, a key question arises: how can we facilitate private data analytics within a decentralized data ecosystem comprising multiple distrusting parties?

My research shows a way forward by designing systems that offer strong and provable privacy guarantees while preserving data functionality. I accomplish this by systematically exploring the synergy between cryptography and differential privacy, exposing their rich interconnections in both theory and practice. In this talk, I will focus on two systems, CryptE and EIFFeL, which enable privacy-preserving query analytics and machine learning, respectively.
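As a point of reference for the differential-privacy side of this synergy, here is a minimal, generic sketch of the Laplace mechanism (a textbook construction, not taken from the CryptE or EIFFeL systems discussed in the talk; all names and parameter values are illustrative): a query answer is released after adding noise calibrated to the query's sensitivity.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with epsilon-differential privacy via Laplace noise."""
    scale = sensitivity / epsilon  # noise scale calibrated to the query's sensitivity
    # A Laplace(0, scale) variate is the difference of two iid exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person's
# record changes the true count by at most 1.
ages = [34, 29, 41, 56, 23, 38]
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means a larger noise scale and hence stronger privacy at the cost of accuracy; cryptographic tools enter when no single server can be trusted to see the true value before noising.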

Speaker's bio:

Amrita Roy Chowdhury is a CRA/CCC CIFellow at the University of California, San Diego, working with Prof. Kamalika Chaudhuri. She graduated with her PhD from the University of Wisconsin-Madison, where she was advised by Prof. Somesh Jha. She completed her Bachelor of Engineering in Computer Science at the Indian Institute of Engineering Science and Technology, Shibpur, where she was awarded the President of India Gold Medal. Her work explores the synergy between differential privacy and cryptography through novel algorithms that expose the rich interconnections between the two areas, both in theory and practice. She was recognized as a Rising Star in EECS in 2020 and 2021, and was a Facebook Fellowship finalist in 2021. She was also selected as a UChicago Rising Star in Data Science in 2021.



Methods for Financial Stability Analysis
Christoph Siebenbrunner | WU Vienna

2024-02-28, 15:00 - 16:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

We present a number of methods used by central banks and regulatory authorities to assess the stability of the financial system, including stress tests, network analysis of the interbank market, and models of interbank contagion and fire sales, aiming to capture dynamics similar to those observed during the 2008 financial crisis. We will discuss the key role of banks in the money creation process, how this relates to monetary aggregates, and what the introduction of central bank digital currencies and their different implementation options may mean in this context.

--

Please contact the office team for link information

Speaker's bio:

Christoph Siebenbrunner is currently affiliated with the Research Institute for Cryptoeconomics at WU Vienna and the strategy and stress testing unit of the banking supervision department at the Austrian National Bank. All views presented are his own. Christoph previously worked as a post-doctoral research fellow in Milind Tambe's group in the Computer Science department at Harvard SEAS, focusing on restless multi-armed bandit models and security games in the Artificial Intelligence for Social Good research stream. Christoph holds a PhD in Mathematics from the University of Oxford, where he also worked as a college lecturer in Mathematics and Statistics, and a PhD in Management Science with a Finance concentration from TU Vienna.



Computational Approaches to Narrative Analysis
Maria Antoniak | Allen Institute

2024-02-26, 10:00 - 11:00
Kaiserslautern building G26, room 111

Abstract:

People use storytelling to persuade, to entertain, to inform, and to make sense of their experiences as a community—and as a form of self-disclosure, personal storytelling can strengthen social bonds, build trust, and support the storyteller’s health. But computational analysis of narratives faces challenges, including the difficulty of defining stories, a lack of annotated datasets, and the need to generalize across diverse settings. In this talk, I’ll present work addressing these challenges that uses methods from natural language processing (NLP) to measure storytelling both across online communities and within specific social contexts. This work has implications for NLP methods for story detection, the literary field of narratology, cooperative work in online communities, and better healthcare support. As one part of my research in NLP and cultural analytics, it also highlights how NLP methods can be used creatively and reliably to study human experiences.

Speaker's bio:

Maria Antoniak is a Young Investigator at the Allen Institute for AI. Her research is in natural language processing (NLP) and cultural analytics, and her interests include using computational methods to study stories, values, and healthcare, often in the setting of online communities, and measuring the reliability of NLP tools when used for curated datasets and human-focused research questions. She earned her PhD in Information Science from Cornell University, has a master’s degree in Computational Linguistics from the University of Washington, and has been recognized as a "Rising Star" in both computer science and data science.



Global Investigation of Network Connection Tampering
Ramakrishnan Sundara Raman | University of Michigan

2024-02-23, 10:00 - 11:00
Bochum building MPI-SP

Abstract:

As the Internet's user base and the criticality of online services continue to expand, powerful adversaries like Internet censors are increasingly monitoring and restricting Internet traffic. These adversaries, powered by advanced network technology, perform large-scale connection tampering attacks seeking to prevent users from accessing specific online content, compromising Internet availability and integrity. In recent years, we have witnessed recurring censorship events affecting Internet users globally, with far-reaching social, financial, and psychological consequences, making them important to study. However, characterizing tampering attacks at the global scale is an extremely challenging problem, given intentionally opaque practices by adversaries, varying tampering mechanisms and policies across networks, evolving environments, sparse ground truth, and safety risks in collecting data. In this talk, I will describe my research on building empirical methods to characterize connection tampering globally and investigate the network technology enabling tampering. First, I will introduce novel network measurement methods for locating and examining network devices that perform censorship. Next, I will describe a modular design for the Censored Planet Observatory that enables it to remotely and sustainably measure Internet censorship longitudinally in more than 200 countries. I will introduce time series analysis methods to detect key censorship events in longitudinal Censored Planet data and reveal global censorship trends. Finally, I will describe exciting ongoing and future research directions, such as building intelligent measurement platforms.

Speaker's bio:

Ram Sundara Raman is a PhD candidate in Computer Science and Engineering at the University of Michigan, advised by Prof. Roya Ensafi. His research lies at the intersection of computer security, privacy, and networking, employing empirical methods to study large-scale Internet attacks. Ram has been recognized as a Rising Star at the Workshop on Free and Open Communications on the Internet (FOCI), and was awarded the IRTF Applied Networking Research Prize in 2023. His work has helped produce one of the biggest active censorship measurement platforms, the Censored Planet Observatory, and has helped prevent large-scale attacks on end-to-end encryption.



High-stakes decisions from low-quality data: AI decision-making for planetary health
Lily Xu | Harvard University

2024-02-21, 10:00 - 11:00
Kaiserslautern building G26, room 111

Abstract:

Planetary health is an emerging field which recognizes the inextricable link between human health and the health of our planet. Our planet’s growing crises include biodiversity loss, with animal population sizes declining by an average of 70% since 1970, and maternal mortality, with 1 in 49 girls in low-income countries dying from complications in pregnancy or birth. Underlying these global challenges is the urgent need to effectively allocate scarce resources. My research develops data-driven AI decision-making methods to do so, overcoming the messy data ubiquitous in these settings. Here, I’ll present technical advances in stochastic bandits, robust reinforcement learning, and restless bandits, addressing research questions that emerge from my close collaboration with the public sector. I’ll also discuss bridging the gap from research and practice, including anti-poaching field tests in Cambodia, field visits in Belize and Uganda, and large-scale deployment with SMART conservation software.
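For readers unfamiliar with the stochastic-bandit setting the abstract builds on, the following is a generic textbook sketch of the UCB1 algorithm (not the speaker's method; the reward probabilities and horizon are made up): each "arm" is a candidate allocation of a scarce resource, and the algorithm balances exploring uncertain options against exploiting the best one seen so far.

```python
import math
import random

def ucb1(reward_probs, horizon, seed=0):
    """Generic UCB1 on Bernoulli arms; returns per-arm pull counts."""
    rng = random.Random(seed)
    n_arms = len(reward_probs)
    counts = [0] * n_arms    # times each arm was pulled
    totals = [0.0] * n_arms  # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull every arm once to initialize estimates
        else:
            # empirical mean + confidence bonus that shrinks with exploration
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

pulls = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

Over time, pulls concentrate on the highest-reward arm while every arm keeps receiving occasional exploration; restless bandits, as in the talk, generalize this by letting each arm's state evolve even when it is not pulled.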

Speaker's bio:

Lily Xu is a computer science PhD student at Harvard developing AI techniques to address planetary health challenges. She focuses on advancing methods in machine learning, large-scale planning, and causal inference. Her work building the PAWS system to predict poaching hotspots has been deployed in multiple countries and is being scaled globally through integration with SMART conservation software. Lily co-organizes the Mechanism Design for Social Good (MD4SG) research initiative and serves as AI Lead for the SMART Partnership. Her research has been recognized with best paper runner-up at AAAI, the INFORMS Doing Good with Good OR award, a Google PhD Fellowship, and a Siebel Scholarship.



Paths to AI Accountability
Sarah Cen | Massachusetts Institute of Technology

2024-02-19, 10:00 - 11:00
Kaiserslautern building G26, room 111

Abstract:

In the past decade, we have begun grappling with difficult questions related to the rise of AI, including: What rights do individuals have in the age of AI? When should we regulate AI and when should we abstain? What degree of transparency is needed to monitor AI systems? These questions are all concerned with AI accountability: determining who owes responsibility and to whom in the age of AI. In this talk, I will discuss the two main components of AI accountability, then illustrate them through a case study on social media. Within the context of social media, I will focus on how social media platforms filter (or curate) the content that users see. I will review several methods for auditing social media, drawing from concepts and tools in hypothesis testing, causal inference, and LLMs.

Speaker's bio:

Sarah is a final-year PhD student at MIT in the Electrical Engineering and Computer Science Department advised by Professor Aleksander Mądry and Professor Devavrat Shah. Sarah utilizes methods from machine learning, statistical inference, causal inference, and game theory to study responsible computing and AI policy. Previously, she has written about social media, trustworthy algorithms, algorithmic fairness, and more. She is currently interested in AI auditing, AI supply chains, and IP Law x Gen AI.



Programming Theory in Security Analysis: A Tripartite Framework for Vulnerability Specification
Yinxi Liu | Chinese University of Hong Kong

2024-02-15, 10:00 - 11:00
Saarbrücken building E1 5, room 002

Abstract:

Living in a computer-reliant era, we’re balancing the power of computer systems with the challenges of ensuring their functional correctness and security. Program analysis has proven successful in addressing these issues by predicting the behavior of a system when executed. However, the complexity of program analysis rises significantly as modern applications employ advanced, high-level programming languages and increasingly take the form of a composite of independent modules that interact in sophisticated ways. In this talk, I will detail how to apply programming language theory to construct refined vulnerability specifications and reduce the complexity of program analysis across computational, conformational, and compositional aspects:

- My primary focus will be on introducing formal specifications that I have developed for modeling the exponential worst-case computational complexity inherent in modern programming languages. These specifications have guided the first worst-case polynomial solution for detecting performance bugs in regexes.

- I will also briefly discuss why generating inputs with complex conformation to target deep-seated bugs is a significant obstacle for existing techniques, and how I devised strategies to generate sounder inputs by intentionally satisfying previously unrecognized forms of dependencies.

- Finally, as part of a vision to enhance security analysis in modern distributed systems, where different operations can be composed in complex ways and may interleave with each other, I will briefly discuss my efforts to establish new security notions to identify non-atomic operations in smart contracts and deter potential attacks that might exploit their interactions.
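The "exponential worst-case computational complexity" in regexes refers to catastrophic backtracking. A minimal, self-contained demonstration (this illustrates the phenomenon only, not the speaker's detection algorithm) pits an ambiguous pattern against a non-matching input:

```python
import re
import time

def match_time(pattern, text):
    """Time one full-match attempt, in seconds."""
    start = time.perf_counter()
    re.fullmatch(pattern, text)
    return time.perf_counter() - start

# (a+)+b is ambiguous: on 'aaa...a' with no final 'b', the backtracking
# engine tries every way of splitting the a's between the two repetitions,
# so each extra 'a' roughly doubles the work.
vulnerable = r"(a+)+b"
t_short = match_time(vulnerable, "a" * 10)
t_long = match_time(vulnerable, "a" * 20)   # orders of magnitude slower

# An equivalent unambiguous pattern is matched in linear time.
t_safe = match_time(r"a+b", "a" * 20)
```

Both patterns accept exactly the same strings; only the ambiguous one exhibits the exponential blowup, which is what makes such performance bugs exploitable as denial-of-service vectors.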

Speaker's bio:

Yinxi Liu is completing her Ph.D. in Computer Science and Engineering at the Chinese University of Hong Kong in July 2024. Her research interests are in the field of computer security, where she constructs refined vulnerability specifications to reduce the complexity of security analysis across computational, conformational, and compositional aspects, and develops program analysis techniques that have found hundreds of real-world bugs. In the last year of her Ph.D. study, Yinxi was a visiting researcher at the Southern University of Science and Technology. She received the Microsoft Research Asia Fellowship Nomination Award in 2022. Her research has been published at the prestigious security conferences IEEE S&P and ACM CCS, and was recognized with the Best Software Artifact Nomination Award at ASE 2021.



Formal Reasoning about Relational Properties in Large-Scale Systems
Jana Hofmann | Azure Research

2024-02-14, 10:00 - 11:00
Bochum building MPI-SP

Abstract:

Establishing strong guarantees for security-critical systems has never been more challenging. On the one hand, systems become increasingly complex and intricate. On the other hand, many security requirements are relational, i.e., they compare several execution traces. Examples are noninterference properties, program indistinguishability, and even algorithmic fairness. Due to their relational nature, formal reasoning about such properties quickly blows up in complexity. In this talk, I discuss the necessity to scale relational reasoning to larger software and black-box systems and present two of our recent contributions to tackle this challenge. In the first part, I focus on formal algorithms for white-box systems and show how to combine relational reasoning with non-relational specifications to enable the synthesis of smart contract control flows. In the second part, I focus on relational testing of black-box systems and illustrate its use in modeling and detecting microarchitectural information leakage in modern CPUs.
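The two-trace flavor of relational testing can be illustrated with a minimal black-box noninterference check (a generic sketch under invented names, not the talk's framework): run the same program on inputs that agree on the public part and differ only in the secret, and require the observable outputs to coincide.

```python
def redact(record):
    """Intended to publish only the public part of a record."""
    # Bug: the error message leaks the length of the secret field.
    if len(record["ssn"]) != 9:
        return f"{record['name']}: invalid ssn of length {len(record['ssn'])}"
    return f"{record['name']}: ssn on file"

def noninterference_test(program, public, secrets):
    """Two-trace relational test: vary only the secret part of the input;
    the set of observable outputs must collapse to a single value."""
    outputs = {program({**public, **secret}) for secret in secrets}
    return len(outputs) == 1  # True = no observable influence of the secret

leak_free = noninterference_test(redact, {"name": "alice"},
                                 [{"ssn": "123456789"}, {"ssn": "12345"}])
# leak_free is False: the two secrets yield observably different outputs
```

The same pair-of-executions structure underlies noninterference, program indistinguishability, and the microarchitectural leakage tests mentioned above, where the "observable output" is a hardware-level trace rather than a return value.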

Speaker's bio:

Jana Hofmann is a postdoctoral researcher at Azure Research, Microsoft. She obtained her PhD in 2022 at CISPA/Saarland University, for which she was awarded the university’s Dr.-Eduard-Martin Prize for the best computer science thesis of the year. Her research interests lie at the intersection of formal methods and security, with publications at LICS, CAV, USENIX Security, and CSF. She develops formal algorithms and testing methods for relational properties, with a current focus on microarchitectural information leakage.



Reliable Measurement for Machine Learning at Scale
A. Feder Cooper | Cornell University

2024-02-08, 10:00 - 11:00
Bochum building MPI-SP

Abstract:

We need reliable measurement in order to develop rigorous knowledge about ML models and the systems in which they are embedded. But reliable measurement is a really hard problem, touching on issues of reproducibility, scalability, uncertainty quantification, epistemology, and more. In this talk, I will discuss the criteria needed to take reliability seriously — criteria for designing meaningful metrics, and for methodologies that ensure that we can dependably and efficiently measure these metrics at scale and in practice. I will give two examples of my research that put these criteria into practice: (1) large-scale evaluation of training-data memorization in large language models, and (2) evaluating latent arbitrariness in algorithmic-fairness binary classification contexts. Throughout this discussion, I will emphasize how important it is to make metrics understandable for other stakeholders in order to facilitate public governance. For this reason, my work aims to design metrics that are legally cognizable — a goal that frames both my ML and legal scholarship. I will draw on important connections that I have uncovered between ML and law: connections between (1) the generative-AI supply chain and US copyright law, and (2) ML arbitrariness and arbitrariness in legal rules. This talk reflects joint work with collaborators at The GenLaw Center, Cornell CS, Cornell Law School, Google DeepMind, and Microsoft Research.
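One crude proxy for the training-data memorization measurements mentioned above (a deliberately simplified sketch, not the speaker's methodology, with toy strings in place of real model outputs) flags a model continuation that reproduces a long verbatim token span from a training document:

```python
def ngram_set(text, n):
    """All n-token spans of a whitespace-tokenized text."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def memorized(training_doc, model_output, n=8):
    """Flag an output sharing any verbatim n-token span with a training doc."""
    return bool(ngram_set(training_doc, n) & ngram_set(model_output, n))

doc = "the quick brown fox jumps over the lazy dog and then ran away"
copied = "she wrote the quick brown fox jumps over the lazy dog again"
fresh = "an unrelated sentence that shares no long span with the document"
```

Real evaluations must grapple with the reliability questions the talk raises: the choice of n, tokenization, near-verbatim paraphrases, and measuring this dependably across billions of training tokens.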

Speaker's bio:

A. Feder Cooper is a researcher in scalable machine learning (ML), working on reliable measurement and evaluation of ML. Cooper’s research develops nuanced quality metrics for ML behaviors, and makes sure that we can effectively measure these metrics at scale and in practice. Cooper’s contributions span distributed training, hyperparameter optimization, uncertainty estimation, model selection, and generative AI. To make sure that our evaluation metrics can meaningfully measure our goals for ML, Cooper also leads research in tech policy and law, and spends a lot of time working to effectively communicate the capabilities and limits of AI/ML to the broader public. Cooper is a CS Ph.D. candidate at Cornell University, an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University, co-founder of The Center for Generative AI, Law, and Policy Research (The GenLaw Center), and a student researcher at Google Research. Cooper has received many spotlight and oral awards at top conferences, including NeurIPS, AAAI, and AIES, and was named a "Rising Star in EECS" by MIT.



Causal Inference for Robust, Reliable, and Responsible NLP
Zhijing Jin | MPI-IS & ETH

2024-02-06, 10:00 - 11:00
Saarbrücken building E1 5, room 002

Abstract:

Despite the remarkable progress in large language models (LLMs), it is well known that natural language processing (NLP) models tend to fit spurious correlations, which can lead to unstable behavior under domain shifts or adversarial attacks. In my research, I develop a causal framework for robust and fair NLP, which investigates the alignment between the causality of human decision-making and model decision-making mechanisms. Under this framework, I develop a suite of stress tests for NLP models across various tasks, such as text classification, natural language inference, and math reasoning, and I propose to enhance robustness by aligning the model learning direction with the underlying data-generating direction. Using this causal inference framework, I also test the validity of causal and logical reasoning in models, with implications for fighting misinformation, and extend the impact of NLP by applying it to analyze the causality behind social phenomena important for our society, such as the causal analysis of policies and the measurement of gender bias. Together, this work forms a roadmap towards socially responsible NLP that ensures the reliability of models and extends their impact to various social applications.

Speaker's bio:

Zhijing Jin (she/her) is a Ph.D. student at the Max Planck Institute & ETH. Her research focuses on socially responsible NLP via causal inference. Specifically, she works on expanding the impact of NLP by promoting NLP for social good, and on developing CausalNLP to improve the robustness, fairness, and interpretability of NLP models, as well as to analyze the causes of social problems. She has published at many NLP and AI venues (e.g., ACL, EMNLP, NAACL, NeurIPS, AAAI, AISTATS). Her work has been featured in MIT News, ACM TechNews, and Synced. She is actively involved in AI for social good, as the co-organizer of three NLP for Positive Impact Workshops (at ACL 2021, EMNLP 2022, and EMNLP 2024), the Moral AI Workshop at NeurIPS 2023, and the RobustML Workshop at ICLR 2021. To support the NLP research community, she organizes the ACL Year-Round Mentorship Program. To foster the causality research community, she organized the Tutorial on CausalNLP at EMNLP 2022, and served as the Publications Chair for the 1st conference on Causal Learning and Reasoning (CLeaR). More information can be found on her personal website: zhijing-jin.com



Towards Ethical and Democratic Design of AI
Tanusree Sharma | University of Illinois at Urbana Champaign

2024-02-05, 10:00 - 11:00
Bochum building MPI-SP

Abstract:

Advancements in Artificial Intelligence (AI) are impacting our lives, raising concerns ranging from data collection and social alignment to the resilience of AI models. A major criticism of AI development is the lack of transparency in design and decision-making about AI behavior, potentially leading to adverse outcomes such as discrimination, lack of inclusivity and representation, breaches of legal rules, and privacy and security risks. Underserved populations, in particular, can be disproportionately affected by these design decisions. Conventional approaches to soliciting people's input, such as interviews, surveys, and focus groups, have limitations: they often lack consensus, coordination, and regular engagement. In this talk, I will present two examples of sociotechnical interventions for democratic and ethical AI. First, to address the need for ethical dataset creation for AI development, I will present a novel method, "BivPriv," which draws ideas from accessible computing and computer vision to create an inclusive private visual dataset with blind users as contributors. Then I will discuss my more recent work on "Inclusive.AI," funded by OpenAI, which addresses concerns of social alignment by providing a democratic platform with decentralized governance mechanisms for scalable user interaction and integrity in AI-related decision-making processes.

Speaker's bio:

Tanusree Sharma is a Ph.D. candidate in Informatics at the University of Illinois at Urbana-Champaign, advised by Yang Wang. She works at the intersection of usable security and privacy, human-centered AI, and decentralized governance, where she uses human-centered methods to design, build, and study systems to address issues around power imbalances in technology design and transparency in complex socio-technical systems (e.g., AI). Tanusree has authored more than 15 publications in premier academic venues across HCI, security, and privacy (e.g., Nature, ACM CHI, USENIX Security). Tanusree's work is supported by NSF, Meta, and OpenAI. She was awarded the OpenAI "Democratic Input to AI" grant as part of her dissertation. Her work has been covered by media outlets such as Nature and Forbes. Her research is deeply influenced by her upbringing in her home country, Bangladesh. You can find out more about Tanusree at https://tanusreesharma.github.io/



Theoretically Sound Cryptography for Key Exchange and Advanced Applications
Doreen Riepel | UC San Diego

2024-02-01, 10:00 - 11:00
Bochum building MPI-SP

Abstract:

Today, nearly all internet connections are established using a cryptographic key exchange protocol. For these protocols, we want to guarantee security even if an adversary can control the protocol's execution or secrets are leaked. A security analysis takes this into account and provides a mathematical proof relying on computational problems that are believed to be hard to solve. In this talk, I will first give an overview of the security properties of authenticated key exchange protocols and how to achieve them using cryptographic building blocks. I will talk about tight security and the role of idealized models such as the generic group model. In addition to classical Diffie-Hellman based key exchange, I will also present recent results on isogeny-based key exchange, a promising candidate for post-quantum secure cryptography. Finally, I will touch upon examples of advanced cryptographic primitives like ratcheted key exchange for secure messaging.

Speaker's bio:

Doreen Riepel is a postdoctoral researcher at UC San Diego working with Mihir Bellare. Her research focuses on the theoretical foundations of applied cryptography and in particular on provable security. She completed her PhD at Ruhr University Bochum where she was advised by Eike Kiltz and funded by the DFG Cluster of Excellence "Cyber Security in the Age of Large-Scale Adversaries" (CASA). During her PhD, she worked on the design and analysis of key exchange protocols. She interned at NTT Research, where she worked on attribute-based encryption.



New Algorithmic Tools for Rigorous Machine Learning Security Analysis
Teodora Baluta | National University of Singapore

2024-01-30, 10:00 - 11:00
Bochum building MPI-SP

Abstract:

Machine learning security is an emerging area with many open questions that lack systematic analysis. In this talk, I will present three new algorithmic tools to address this gap: (1) algebraic proofs; (2) causal reasoning; and (3) sound statistical verification. Algebraic proofs provide the first conceptual mechanism to resolve intellectual property disputes over training data. I show that stochastic gradient descent, the de facto training procedure for modern neural networks, is a collision-resistant computation under precise definitions. These results open up connections to lattices, mathematical tools currently used in cryptography. I will also briefly mention my efforts to analyze the causes of empirical privacy attacks and defenses using causal models, and to devise statistical verification procedures with 'probably approximately correct' (PAC)-style soundness guarantees.

Speaker's bio:

Teodora Baluta is a Ph.D. candidate in Computer Science at the National University of Singapore. She enjoys working on security problems that are both algorithmic in nature and practically relevant. She is one of the EECS Rising Stars 2023, a Google PhD Fellow, a Dean’s Graduate Research Excellence Award recipient and a President’s Graduate Fellowship recipient at NUS. She interned at Google Brain working in the Learning for Code team. Her works are published in security (CCS, NDSS), programming languages/verification conferences (OOPSLA, SAT), and software engineering conferences (ICSE, ESEC/FSE). More details are available on her webpage: https://teobaluta.github.io/.



Oblivious Algorithms for Privacy-Preserving Computations
Sajin Sasy | University of Waterloo

2024-01-25, 10:00 - 11:00
Bochum building MPI-SP

Abstract:

People around the world use data-driven online services every day. However, data center attacks and data breaches have become a regular and rising phenomenon. How, then, can one reap the benefits of data-driven statistical insights without compromising the privacy of individuals' data? In this talk, I will first present an overview of three disparate approaches towards privacy-preserving computations today, namely homomorphic cryptography, distributed trust, and secure hardware. These ostensibly unconnected approaches have one unifying element: oblivious algorithms. I will discuss the relevance and pervasiveness of oblivious algorithms in all the different models for privacy-preserving computations. Finally, I highlight the performance and security challenges in deploying such privacy-preserving solutions, and present three of my works that mitigate these obstacles through the design of novel efficient oblivious algorithms.
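To give a flavour of what "oblivious" means here (an illustrative sketch of ours, not code from the talk): an oblivious algorithm's control flow and memory access pattern must not depend on secret data. A minimal building block is a branchless select, which oblivious sorting networks use in place of data-dependent branches:

```python
def oblivious_select(cond_bit, a, b):
    # Return a if cond_bit == 1, else b, without branching on the
    # condition: the same instructions execute either way.
    mask = -cond_bit          # all-ones mask if cond_bit == 1, zero if 0
    return (a & mask) | (b & ~mask)

def oblivious_compare_swap(arr, i, j):
    # One comparator of a sorting network: it always reads and writes both
    # positions, so the memory access pattern is independent of the data.
    bit = int(arr[i] > arr[j])
    arr[i], arr[j] = (oblivious_select(bit, arr[j], arr[i]),
                      oblivious_select(bit, arr[i], arr[j]))
```

In a real deployment (e.g., inside a secure-hardware enclave) the same idea would be implemented in a lower-level language with constant-time guarantees; Python serves here only to illustrate the access-pattern argument.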

Speaker's bio:

Sajin Sasy is a PhD candidate in the Cryptography, Security, and Privacy (CrySP) group at the University of Waterloo, advised by Ian Goldberg. His work focuses on improving the security and privacy of individuals' data and communications online, through research spanning the fields of cryptography, design and analysis of algorithms, distributed systems, and machine learning. In particular, his work presents novel privacy-preserving computation protocols that improve asymptotics, underlying constants, wall-clock time, and parallelizability over state-of-the-art solutions. His works have been published in top-tier systems security venues (ACM CCS and NDSS), privacy venues (PoPETs), and in AI and machine learning venues (NeurIPS and AAAI). His research is supported by an NSERC Collaborative Research and Development Grant with the Royal Bank of Canada.



Cocon: A Type-Theoretic Framework for Certified Meta-programming
Brigitte Pientka | McGill University

2023-12-15, 09:00 - 10:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Meta-programming is the art of writing programs that produce or manipulate other programs. This allows programmers to automate error-prone or repetitive tasks, and to exploit domain-specific knowledge to customize the generated code. Hence, meta-programming is widely used in a range of technologies: from cryptographic message authentication in secure network protocols to supporting reflection in proof environments such as Lean or Coq.

Unfortunately, writing safe meta-programs remains very challenging and sometimes frustrating, as errors in the generated code are traditionally detected only when running it, not at the time the code is generated. To make it easier to write and maintain meta-programs, tools that allow us to detect errors during code generation -- instead of when running the generated code -- are essential.

This talk revisits Cocon, a Martin-Löf dependent type theory for defining logics and proofs, as a framework for certified meta-programming. Cocon allows us to represent domain-specific languages (DSLs) within the logical framework LF and, in addition, to write recursive meta-programs and proofs about those DSLs. In particular, we can embed STLC, System F, or even MLTT itself into LF, and then write programs about those encodings using Cocon itself. This means Cocon can be viewed as a type-theoretic framework for certified meta-programming.

This work revisits the LICS'19 paper "A Type Theory for Defining Logics" by Brigitte Pientka, David Thibodeau, Andreas Abel, Francisco Ferreira, and Rébecca Zucchini, and reframes it as a foundation for meta-programming. It highlights what is necessary to use Cocon as a type-theoretic foundation for certified meta-programming and how to build such a certified meta-programming system from the ground up.

This is joint work with Jason Z. Hu and Junyoung Jang.

---

Please contact the Office team for Zoom link information.

Speaker's bio:

-



The Never-Ending Trace: An Under-Approximate Approach to Divergence Bugs
Caroline Cronjäger | Vrije Universiteit Amsterdam

2023-12-04, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 607 / Meeting ID: -

Abstract:

The goal of under-approximating logics, such as incorrectness logic, is to reason about the existence of bugs by specifying a subset of possible program behaviour. While this approach allows for the specification of a broad range of bugs, it does not account for divergence bugs (such as infinite loops), since the nature of the triple itself does not allow statements about infinite execution traces.

To fill this gap, I will present a divergent variant of the forward under-approximating (FUA) triple. This new separation logic reasons compositionally about divergence bugs at the level of functions, and is under-approximating in the sense that all specified bugs are true bugs, that is, reachable. It can detect divergence originating from loops, recursive function calls, and goto-cycles.

I will motivate the talk by defining the type of divergent programs we aim to reason about, and show how previously introduced logics fall short of specifying their divergent behaviour. After introducing the FUA triple, I will outline how it differs from traditional over-approximating and under-approximating approaches such as (partial) Hoare logic and incorrectness logic. Finally, I will discuss the mechanisms within the FUX framework that enable reasoning about divergence, with a focus on how to prove divergence arising from goto-cycles.

--

Please contact the office team for the Zoom link details.

Speaker's bio:

During and after my bachelor's degree in mathematics and computer science, I worked on verification and program logics through multiple research internships. While at Imperial College London in 2021, I worked on the soundness proof of under-approximate reasoning about function calls, and co-authored a paper on Exact Separation Logic. My most recent work focuses on providing a logic that can prove the existence of non-terminating execution traces without false-positive bug reports. I get excited about proofs and all things mathy.



The complexity of Presburger arithmetic with power or powers
Dmitry Chistikov | University of Warwick

2023-11-14, 11:00 - 12:00
Kaiserslautern building G26, room 111

Abstract:

Presburger arithmetic, or linear integer arithmetic, is known to have decision procedures that work in triply exponential time.

Jointly with M. Benedikt (Oxford) and A. Mansutti (IMDEA Software Institute), we have recently considered two decidable extensions of Presburger arithmetic: with the power function and with the predicate for the set of powers of 2. No elementary decision procedures were known for these two theories.

In this talk, I will introduce this work and outline the ideas behind our results. Namely, we have shown that the existence of solutions over N to systems of linear equations and constraints of the form $y = 2^x$ can be decided in nondeterministic exponential time. Also, linear integer arithmetic extended with a predicate for powers of 2 can be decided in triply exponential time.
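As a toy instance of the first decision problem (our example, not taken from the talk), mixing linear constraints with the power function already yields nontrivial questions:

```latex
% Does the system have a solution over N?
y = 2^{x} \,\wedge\, y = 2x
  \;\Longrightarrow\; \text{yes: } (x, y) \in \{(1, 2),\, (2, 4)\};
\qquad
y = 2^{x} \,\wedge\, y = 3x
  \;\Longrightarrow\; \text{no solution.}
```

Systems of this shape are exactly what the nondeterministic exponential-time procedure decides uniformly.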

(Based on a paper in ICALP'23.)

Speaker's bio:

-



Exposing Concurrency Bugs from their Hiding Places
Umang Mathur | National University of Singapore

2023-10-26, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Concurrent programs are notoriously hard to write correctly, as scheduling nondeterminism introduces subtle errors that are both hard to detect and to reproduce.

Despite rigorous testing, concurrency bugs such as race conditions often find their way into production software, and manifest as critical security issues. Consequently, considerable effort has been made towards developing efficient techniques for detecting them automatically.

The preferred approach to detect data races is through dynamic analysis, where one executes the software with some test inputs, and checks for the presence of bugs in the execution observed.

Traditional bug detectors however are often unable to discover simple bugs present in the underlying software, even after executing the program several times, because these bugs are sensitive to thread scheduling.

In this talk, I will discuss how runtime predictive analysis can help. Runtime predictive analyses aim to expose concurrency bugs that would otherwise be missed by traditional dynamic analysis techniques (such as the race detector TSan), by inferring the presence of these bugs in alternate executions of the underlying software, without explicitly re-executing the program.

I will talk about the fundamentals of and recent algorithmic advances for building highly scalable and sound predictive analysis techniques.
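For context (an illustrative sketch of ours, not the algorithms from the talk): the classical happens-before analysis underlying detectors like TSan tracks vector clocks and flags two conflicting accesses as a race only when they are unordered in the observed trace; predictive analyses go further by also inferring races that would surface under a different, but feasible, reordering of the same trace.

```python
from collections import defaultdict

def hb_races(trace, num_threads):
    """A toy happens-before race checker over one observed trace.
    Events are tuples: ('read'|'write', thread, var) or ('acq'|'rel', thread, lock).
    Flags pairs of conflicting access indices that are unordered by happens-before."""
    clocks = [[0] * num_threads for _ in range(num_threads)]
    lock_clocks = defaultdict(lambda: [0] * num_threads)
    accesses = defaultdict(list)   # var -> [(event index, op, thread, clock snapshot)]
    races = []
    for idx, (op, t, x) in enumerate(trace):
        clocks[t][t] += 1                      # local step of thread t
        if op == 'acq':                        # inherit orderings published via the lock
            clocks[t] = [max(a, b) for a, b in zip(clocks[t], lock_clocks[x])]
        elif op == 'rel':                      # publish our orderings via the lock
            lock_clocks[x] = list(clocks[t])
        else:                                  # read or write of variable x
            snap = list(clocks[t])
            for j, op2, t2, c2 in accesses[x]:
                conflict = 'write' in (op, op2)
                ordered = c2[t2] <= snap[t2]   # earlier access happens-before us?
                if conflict and t2 != t and not ordered:
                    races.append((j, idx))
            accesses[x].append((idx, op, t, snap))
    return races
```

A purely happens-before detector like this misses races that only appear under other schedulings, which is precisely the gap that predictive analysis addresses.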

Speaker's bio:

Umang Mathur is a Presidential Young Professor at the National University of Singapore. He received his PhD from the University of Illinois at Urbana-Champaign and was an NTT Research Fellow at the Simons Institute for the Theory of Computing at Berkeley. His research broadly centers on developing techniques inspired by formal methods and logic for answering design, analysis, and implementation questions in programming languages, software engineering, and systems. He has received a Google PhD Fellowship, an ACM SIGSOFT Distinguished Paper Award at ESEC/FSE'18, a Best Paper Award at ASPLOS'22, and an ACM SIGPLAN Distinguished Paper Award at POPL'23 for his work on designing techniques and tools for analyzing concurrent software. More details can be found at: https://www.comp.nus.edu.sg/~umathur/



Naturalness & Bimodality of Code
Prem Devanbu | University of California, Davis

2023-10-18, 10:00 - 11:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

After discovering, back in 2011, that language models are useful for modeling repetitive patterns in source code (cf. "The Naturalness of Software"), and exploring some applications thereof, our group at UC Davis has more recently (since about 2019) focused on the observation that software, as usually written, is bimodal: it admits both the well-known formal, deterministic semantics (mostly for machines) and a probabilistic, noisy semantics (for humans). This bimodality property affords both new approaches to software tool construction (using machine learning) and new ways of studying human code reading. In this talk, I'll give an overview of the Naturalness/Bimodality program, along with some recent work we have done on calibrating the quality of code produced by large language models, and on "bimodal prompting".

--

Please contact the office team for link information

Speaker's bio:

Prem Devanbu holds a B.Tech. from IIT Madras and a Ph.D. from Rutgers University. After circa 20 years at Bell Labs, he joined UC Davis, where he is now a Distinguished Research Professor of Computer Science. He works in empirical software engineering and AI for SE, specifically exploiting the "big data" available in software repositories to support software development. He is a winner of the ACM SIGSOFT Outstanding Research Award (2021) and the Alexander von Humboldt Research Award (2022), as well as several best-paper, most-influential-paper, and test-of-time awards. He is a Fellow of the ACM.



Algorithms for Plurality
Smitha Milli | Cornell Tech

2023-10-17, 10:00 - 11:00
Saarbrücken building E1 5, room 029

Abstract:

Machine learning algorithms curate much of the content we encounter online. However, there is concern that these algorithms may unintentionally amplify hostile discourse and perpetuate divisive 'us versus them' mentalities. How can we re-engineer algorithms to bridge diverse perspectives and facilitate constructive conflict? First, I will discuss results from our randomized experiment measuring the effects of Twitter's engagement-based ranking algorithm on downstream sociopolitical outcomes such as the amplification of divisive content and users' perceptions of their in-group and out-group. Crucially, we found that an alternative ranking, based on users' stated preferences rather than their engagement, reduced the amplification of negative, partisan, and out-group-hostile content. Second, I will discuss how we applied these insights in practice to design an objective function for algorithmic ranking at Twitter. The core idea of the approach is to interpret users' actions in a way that is consistent with their stated, reflective preferences. Finally, I will discuss lessons learned and open questions for designing algorithms that support a plurality of viewpoints, with an emphasis on the emerging paradigm of bridging-based ranking.

Speaker's bio:

Smitha Milli is a Postdoctoral Associate at Cornell Tech. They received their BS and PhD in Electrical Engineering & Computer Science from UC Berkeley, where they were supported by an NSF Graduate Research Fellowship and an Open Philanthropy AI Fellowship. The goal of their research is to create algorithms that help bridge diverse perspectives and facilitate constructive conflict. To do so, they employ a range of methodologies: randomized trials akin to those in the social sciences, the creation of novel machine learning algorithms, and economic game-theoretic analyses of socio-technical phenomena. Beyond academic outlets, Smitha's work has been discussed on live television on ABC7 News, in articles published by Tech Policy Press and the Knight First Amendment Institute on the effects of social media, and in testimony to the House Financial Services Committee.



Software Engineering for Data Intensive Scalable Computing and Heterogeneous Computing
Miryung Kim | UCLA

2023-09-28, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

With the development of big data, machine learning, and AI, existing software engineering techniques must be re-imagined to provide the productivity gains that developers desire. Furthermore, specialized hardware accelerators like GPUs or FPGAs have become a prominent part of the current computing landscape. However, developing heterogeneous applications is limited to a small subset of programmers with specialized hardware knowledge. To improve productivity and performance for data-intensive and compute-intensive development, now is the time that the software engineering community should design new waves of refactoring, testing, and debugging tools for big data analytics and heterogeneous application development.

In this talk, we overview software development challenges in this new data-intensive scalable computing and heterogeneous computing domain. We describe examples of automated software engineering (debugging, testing, and refactoring) techniques that target this new domain and share lessons learned from building these techniques.

Speaker's bio:

Miryung Kim is a Professor and Vice Chair of Graduate Studies in the Department of Computer Science at UCLA. Her current research focuses on software developer tools for data-intensive scalable computing and heterogeneous computing. Her group created automated testing and debugging tools for Apache Spark and conducted the largest-scale study of data scientists in industry. Her group's Java bytecode debloating tool, JDebloat, was transferred to the Navy.

She has graduated six students who became professors (at Columbia, Purdue, two at Virginia Tech, and elsewhere). For her impact on nurturing the next generation of academics, she received the ACM SIGSOFT Influential Educator Award. She was a Program Co-Chair of FSE 2022, and a keynote speaker at ASE 2019 and ISSTA 2022. She has given distinguished lectures at CMU, UIUC, UMN, UC Irvine, and elsewhere. She is a two-time recipient of the ICSME 10-Year Most Influential Paper Award, and a recipient of an NSF CAREER award, a Microsoft Software Engineering Innovation Foundation Award, an IBM Jazz Innovation Award, a Google Faculty Research Award, an Okawa Foundation Research Award, and a Humboldt Fellowship from the Alexander von Humboldt Foundation. She is an ACM Distinguished Member.



Robust and Equitable Uncertainty Estimation
Aaron Roth | University of Pennsylvania

2023-09-27, 14:00 - 15:00
Virtual talk

Abstract:

Machine learning provides us with an amazing set of tools to make predictions, but how much should we trust particular predictions? To answer this, we need a way of estimating the confidence we should have in particular predictions of black-box models. Standard tools for doing this give guarantees that are averaged over predictions. For instance, in a medical application, such tools might paper over poor performance on one medically relevant demographic group if it is made up for by higher performance on another group. Standard methods also depend on the data distribution being static -- in other words, they assume the future will be like the past.

In this talk, I will describe new techniques to address both of these problems: a way to produce prediction sets for arbitrary black-box prediction methods that have correct empirical coverage even when the data distribution changes in arbitrary, unanticipated ways, and that retain correct coverage even when we zoom in on demographic groups that may be arbitrary and intersecting. When we want only correct group-wise coverage and are willing to assume that the future will look like the past, our algorithms are especially simple.
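For orientation (a minimal sketch of ours, not the methods from the talk): the simplest way to attach a prediction set to a black-box regressor is split conformal prediction, which gives exactly the average-case, static-distribution guarantee that the talk's techniques strengthen to group-wise and distribution-shift-robust coverage.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    # Nonconformity score: absolute residual on a held-out calibration set.
    scores = np.abs(np.asarray(cal_labels) - np.asarray(cal_preds))
    n = len(scores)
    # Finite-sample-corrected quantile level for ~(1 - alpha) coverage,
    # valid under exchangeability (i.e., the future looks like the past).
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    return test_pred - qhat, test_pred + qhat
```

The guarantee is marginal: averaged over all test points, roughly a 1 - alpha fraction of intervals contain the truth, with no promise for any particular demographic group or under distribution shift.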

This talk is based on two papers, which are joint work with Osbert Bastani, Varun Gupta, Chris Jung, Georgy Noarov, and Ramya Ramalingam.

Please contact the office team for link information

Speaker's bio:

Aaron Roth is the Henry Salvatori Professor of Computer and Cognitive Science, in the Computer and Information Sciences department at the University of Pennsylvania, with a secondary appointment in the Wharton statistics department. He is affiliated with the Warren Center for Network and Data Science, and co-director of the Networked and Social Systems Engineering (NETS) program. He is also an Amazon Scholar at Amazon AWS. He is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE) awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, an NSF CAREER award, and research awards from Yahoo, Amazon, and Google. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory, learning theory, and machine learning. Together with Cynthia Dwork, he is the author of the book "The Algorithmic Foundations of Differential Privacy." Together with Michael Kearns, he is the author of "The Ethical Algorithm".



AI as a resource: strategy, uncertainty, and societal welfare
Kate Donahue | Cornell University

2023-09-27, 10:00 - 11:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In recent years, humanity has been faced with a new resource: artificial intelligence. AI can be a boon to society, but it can also have negative impacts, especially with inappropriate use. My research agenda studies the societal impact of AI, particularly focusing on AI as a resource and on the strategic decisions that agents make in deciding how to use it. In this talk, I will consider some of the key strategic questions that arise in this framework: the decisions that agents make in jointly constructing and sharing AI models, and the decisions that they make in dividing tasks between their own expertise and the expertise of a model. The first of these questions has motivated my work on "model-sharing games", which models scenarios such as federated learning or data cooperatives. In this setting, we view agents with data as game-theoretic players and analyze questions of stability, optimality, and fairness (https://arxiv.org/abs/2010.00753, https://arxiv.org/abs/2106.09580, https://arxiv.org/abs/2112.00818). Secondly, I will describe some of my ongoing work on modeling human-algorithm collaboration. In particular, I will describe work on best-item recovery in categorical prediction, showing how differential accuracy rates and anchoring on algorithmic suggestions can influence overall performance (https://arxiv.org/abs/2308.11721).

Speaker's bio:

Kate Donahue is a sixth-year computer science PhD candidate at Cornell, advised by Jon Kleinberg. She works on algorithmic problems relating to the societal impact of AI, such as fairness, human/AI collaboration, and game-theoretic models of federated learning. Her work has been supported by an NSF fellowship and recognized with a FAccT Best Paper award. During her PhD, she has interned at Microsoft Research, Amazon, and Google.



Privacy Auditing and Protection in Large Language Models
Fatemehsadat Mireshghallah | University of Washington

2023-09-18, 10:00 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Large Language Models (LLMs, e.g., GPT-3, OPT, TNLG) are shown to have remarkably high performance on standard benchmarks, due to their high parameter count, extremely large training datasets, and significant compute. Although the high parameter count in these models leads to more expressiveness, it can also lead to higher memorization, which, coupled with large unvetted, web-scraped datasets, can cause different negative societal and ethical impacts such as leakage of private, sensitive information and generation of harmful text. In this talk, we will go over how these issues affect the trustworthiness of LLMs, and zoom in on how we can measure the leakage and memorization of these models and mitigate it through differentially private training. Finally, we will discuss what it would actually mean for LLMs to be privacy-preserving, and what the future research directions are for making large models trustworthy.

Speaker's bio:

Fatemehsadat Mireshghallah is a post-doctoral scholar at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She received her Ph.D. from the CSE department of UC San Diego in 2023. Her research interests are trustworthy machine learning and natural language processing. She is a recipient of the National Center for Women & IT (NCWIT) Collegiate Award in 2020 for her work on privacy-preserving inference, a finalist for the Qualcomm Innovation Fellowship in 2021, and a recipient of the 2022 Rising Star in Adversarial ML award.



Next-Generation Optical Networks for Machine Learning Jobs
Manya Ghobadi | MIT

2023-09-04, 10:00 - 11:00
Kaiserslautern building G26, room 111

Abstract:

In this talk, I will explore three elements of designing next-generation machine learning systems: congestion control, network topology, and computation frequency. I will show that fair sharing, the holy grail of congestion control algorithms, is not necessarily desirable for deep neural network training clusters. Then I will introduce a new optical fabric that optimally combines network topology and parallelization strategies for machine learning training clusters. Finally, I will demonstrate the benefits of leveraging photonic computing systems for real-time, energy-efficient inference via analog computing. I will discuss that pushing the frontiers of optical networks for machine learning workloads will enable us to fully harness the potential of deep neural networks and achieve improved performance and scalability.

Speaker's bio:

Manya Ghobadi is a faculty member in the EECS department at MIT. Her research spans different areas in computer networks, focusing on optical reconfigurable networks, networks for machine learning, and high-performance cloud infrastructure. Her work has been recognized by the ACM-W Rising Star award, a Sloan Fellowship in Computer Science, the ACM SIGCOMM Rising Star award, an NSF CAREER award, the Optica Simmons Memorial Speakership award, a best paper award at the Machine Learning Systems (MLSys) conference, as well as the best dataset and best paper awards at the ACM Internet Measurement Conference (IMC). Manya received her Ph.D. from the University of Toronto and spent a few years at Microsoft Research and Google before joining MIT.



On Synthesizability of Skolem Functions in First-Order Theories
Supratik Chakraborty | IIT Bombay

2023-06-20, 15:00 - 16:00
Kaiserslautern building G26, room 207 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Given a sentence $\forall X \exists Y \varphi(X, Y)$ in a first-order theory, it is well known that there exists a function $F(X)$ for $Y$ in $\varphi$ such that $\exists Y \varphi(X, Y) \leftrightarrow \varphi(X, F(X))$ holds for all values of the universal variables $X$. Such a function is called a Skolem function, in honour of Thoralf Skolem, who first made use of such functions in proving what are now known as the Löwenheim-Skolem theorems. The existence of a Skolem function for a given formula is technically analogous to the Axiom of Choice -- it does not give us any hint about how to compute the function, although we know such a function exists. Nevertheless, since Skolem functions are often very useful in practical applications (like finding a strategy for a reactive controller), we investigate when it is possible to algorithmically construct a Turing machine that computes a Skolem function for a given first-order formula. We show that under fairly relaxed conditions, this cannot be done. Does this mean the end of the road for automatic synthesis of Skolem functions? Fortunately, no. We give model-theoretic necessary and sufficient conditions for the existence and algorithmic synthesizability of Turing machines implementing Skolem functions. We show that several useful first-order theories satisfy these conditions, and hence admit algorithms that can synthesize Turing machines implementing Skolem functions. We conclude by presenting several open problems in this area.
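As a simple illustration (our example, not taken from the abstract): over linear integer arithmetic, the sentence $\forall X \exists Y\,(Y > X)$ admits the easily computable Skolem function $F(X) = X + 1$:

```latex
\forall X\, \exists Y\; (Y > X)
\qquad\leadsto\qquad
F(X) = X + 1,
\quad\text{since}\quad
\forall X\;\bigl(\exists Y\,(Y > X) \;\leftrightarrow\; (X + 1 > X)\bigr).
```

The synthesis question asked in the talk is when such a concrete, computable witness can be constructed algorithmically rather than merely shown to exist.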

Speaker's bio:

Supratik Chakraborty is Bajaj Group Chair Professor in the Department of Computer Science and Engineering at IIT Bombay. His research interests include applications of formal methods to the verification, synthesis and analysis of complex systems, including hardware, software and machine-learning enabled systems. He also works on constrained sampling and counting and their applications, and on automata theory and logic. Supratik is a Distinguished Member of ACM, a Fellow of the Indian National Academy of Engineering and a recipient of the Distinguished Alumnus Award of IIT Kharagpur.



Scalable and Sustainable Data-Intensive Systems
Bo Zhao | Lecturer (Assistant Professor) in Computer Science at Queen Mary University of London and Honorary Research Fellow at Imperial College London

2023-05-25, 16:00 - 17:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 207

Abstract:

Efficient data-intensive systems translate data into value for decision making. As data is collected at unprecedented rates for timely analysis, the model-centric paradigm of machine learning (ML) is shifting towards a data-centric and system-centric paradigm. Recent breakthroughs in large ML models (e.g., GPT-4 and ChatGPT) and the remarkable outcomes of reinforcement learning (e.g., AlphaFold and AlphaCode) have shown that scalable data management and its optimizations are critical to obtaining state-of-the-art performance. This talk aims to answer the question "How can we co-design multiple layers of the software/system stack to improve the scalability, performance, and energy efficiency of ML and data-intensive systems?" It addresses the challenges of building fully automated data-intensive systems that integrate the ML layer, the data management layer, and the compilation-based optimization layer. Finally, this talk will sketch and explore the vision of leveraging the computational advantage of quantum computing on hybrid classical/quantum systems in the post-Moore era.

Please contact office for zoom link information.

Speaker's bio:

Bo Zhao is a Lecturer (Assistant Professor) in Computer Science at Queen Mary University of London and an Honorary Research Fellow at Imperial College London. Bo’s research focuses on efficient data-intensive systems at the intersection of scalable reinforcement learning systems and distributed data management systems, as well as compilation-based optimization techniques. His long-term goal is to explore and understand the fundamental connections between data management and modern machine learning systems to make decision-making transparent, robust and efficient. Please find more details via http://www.eecs.qmul.ac.uk/~bozhao/.



A Generic Solution to Register-bounded Synthesis for Systems over Data words
Léo Exibard | Icelandic Centre of Excellence in Theoretical Computer Science at Reykjavik University

2023-05-12, 13:30 - 14:45
Kaiserslautern building G26, room 111

Abstract:

In this talk, we consider the synthesis of reactive systems interacting with environments over an infinite data domain. Popular formalisms for specifying and modelling such systems are register automata and transducers. They extend finite-state automata with registers that store data values and allow incoming data values to be compared against stored ones. Synthesis from nondeterministic or universal register automata is undecidable in general. However, its register-bounded variant, where a bound on the number of registers in the sought transducer is additionally given, is known to be decidable for universal register automata that can compare data for equality, i.e., for the data domain (N,=).

After briefly reviewing this result, we extend it to the domain (N,<) of natural numbers with linear order. Our solution is generic: we define a sufficient condition on data domains (regular approximability) for decidability of register-bounded synthesis. It allows one to use simple language-theoretic arguments and avoid technical game-theoretic reasoning. Further, by defining a generic notion of reducibility between data domains, we show the decidability of synthesis in the domain (N^d,<^d) of tuples of numbers equipped with the component-wise partial order and in the domain (Σ*, ≺) of finite strings with the prefix relation.
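To make the register-transducer model concrete, here is a minimal sketch (not from the talk; the class name and behaviour are invented for illustration): a one-register transducer over the data domain (N,=) that stores the first data value it reads and echoes it on every subsequent input.

```python
class RegisterTransducer:
    """Toy one-register transducer over (N, =): stores the first data
    value seen, then outputs the stored value for every input."""

    def __init__(self):
        self.register = None  # the single register, initially empty

    def step(self, value):
        """Consume one input data value, produce one output data value."""
        if self.register is None:
            self.register = value  # store the incoming data value
        return self.register       # output the stored value

t = RegisterTransducer()
outputs = [t.step(v) for v in [7, 3, 9]]  # echoes the first value: [7, 7, 7]
```

A register-bounded synthesis procedure, as in the talk, would search for such a transducer with at most k registers satisfying a specification given by a universal register automaton.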

Speaker's bio:

-



Making monkeys and ducks behave with Crystal Lang
Beta Ziliani | Manas.Tech and FAMAF, Universidad Nacional de Córdoba, Argentina

2023-05-04, 15:00 - 16:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

In the zoo of programming languages there are two cute yet rather misbehaved animals, typically found in the Dynamic Languages section: Duck Typing and Monkey Patching.

Duck Typing is hardly seen. You hear a "quack!", but you can’t easily tell if it’s coming from an actual duck, a parrot, or a recording. Monkey Patching, as the name suggests, patches any existing creature to change its behavior. It can even make a dog quack!

While these two animals bring lots of joy, they are also quite dangerous when used in the wild, as they can bring unexpected behavior to the rest of the creatures.

Crystal is a rarity among Static Languages in that it has Duck Typing and Monkey Patching. Given the strong —yet barely visible— fences of types, it manages to properly contain these beasts. In this talk I will present Crystal and provide a glimpse at how it manages to feel so dynamic.
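The two beasts the abstract describes are easy to exhibit in a dynamic language such as Python (an illustrative sketch with invented class names; Crystal's contribution is fencing in exactly this behaviour with static types):

```python
class Duck:
    def quack(self):
        return "quack!"

class Dog:
    def bark(self):
        return "woof!"

def provoke(animal):
    # Duck typing: all we require is that `animal` responds to quack();
    # its actual class is irrelevant.
    return animal.quack()

# Monkey patching: graft a quack method onto an existing class at
# runtime -- now even a dog quacks.
Dog.quack = lambda self: "quack!"

assert provoke(Duck()) == "quack!"
assert provoke(Dog()) == "quack!"
```

In Python both calls succeed (and an unpatched Dog would fail only at runtime); Crystal type-checks such calls at compile time while still allowing both idioms.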

Please contact office for zoom link information.

Speaker's bio:

Beta leads the development of the Crystal Programming Language and teaches about programming languages at Universidad Nacional de Córdoba in Argentina. With a recent past as a researcher in programming languages, he was notably the first student of Derek Dreyer to get a PhD at MPI-SWS. He has neither ducks nor monkeys, despite them being effective weapons against Córdoba's venomous scorpions.



2vyper: Contracts for Smart Contracts
Alexander J. Summers | University of British Columbia

2023-04-27, 16:00 - 17:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Smart contract languages are increasingly popular and numerous, and their programming models and challenges are somewhat unusual. The ubiquitous presence of untrusted external code in such a system makes classical contracts unsuitable for safety verification, while the intentional presence of (potentially-mutating) callbacks via unknown code makes standard static analysis techniques imprecise in general. On the other hand, smart contract languages such as Vyper (for Ethereum) tightly encapsulate direct access to the program's state. In this talk I'll present a methodology for expressing contracts for this language, in a way that supports sound verification of safety properties, with deductive verification tooling (converting Vyper to Viper) to automate the corresponding proofs.

Based on joint work with Christian Bräm, Marco Eilers, Peter Müller and Robin Sierra; see also the accompanying paper at OOPSLA 2021.

Please contact the office for Zoom link information.

Speaker's bio:

Alex Summers is an Associate Professor of Computer Science at the University of British Columbia (UBC). Prior to moving to UBC in early 2020, he worked at ETH Zurich as a senior researcher in the Chair of Programming Methodology group run by Peter Müller. Alex works in the general area of program correctness, including developing new specification and verification logics and type systems, and building automated tools for constructing proofs about heap-based and concurrent programs, usually on top of SMT solvers. He coordinated the Viper project for several years, and started the Prusti project providing user-facing verification for Rust. Alex is broadly interested in software verification for a wide variety of concurrent and imperative programming paradigms, and was awarded the 2015 Dahl-Nygaard Junior Prize for his work in these areas.



Automating cryptographic code generation
Yuval Yarom | Ruhr University Bochum

2023-04-24, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Cryptography provides the data protection mechanisms that underlie security and privacy in the modern connected world. Given this pivotal role, implementations of cryptographic code must not only be correct, but also meet stringent performance and security requirements. Achieving these aims is often difficult and requires significant investment in software development and manual tuning.

This talk presents two approaches for automating the task of generating correct, secure, and efficient cryptographic code. The first, Rosita, uses a power consumption emulator to detect unintended leaky interactions between values in the microarchitecture. It then rewrites the code to eliminate these interactions and produce code that is resistant to power analysis. The second, CryptOpt, uses evolutionary computation to search for the most efficient constant-time implementation of a cryptographic function. It then formally verifies that the produced implementation is semantically equivalent to the original code.

Rosita is joint work with Lejla Batina, Łukasz Chmielewski, Francesco Regazzoni, Niels Samwel, Madura A. Shelton, and Markus Wagner. CryptOpt is joint work with Adam Chlipala, Chitchanok Chuengsatiansup, Owen Conoly, Andres Erbsen, Daniel Genkin, Jason Gross, Joel Kuepper, Chuyue Sun, Samuel Tian, Markus Wagner, and David Wu.

Please contact office for zoom link information.

Speaker's bio:

Yuval Yarom is a Professor of Computer Science at Ruhr University Bochum (RUB). Before joining RUB, he was an Associate Professor at the School of Computer and Mathematical Sciences at the University of Adelaide. He earned a Ph.D. in Computer Science from the University of Adelaide in 2014. Earlier, he was the Vice President of Research in Memco Software and a co-founder and Chief Technology Officer of Girafa.com. Yuval is well-known as a co-discoverer of the Spectre family of microarchitectural side-channel attacks, and has won numerous awards for his research.



Quantum Pseudoentanglement
Adam Bouland | Stanford, Computer Science

2023-03-02, 17:00 - 18:00
Virtual talk

Abstract:

Quantum pseudorandom states are efficiently constructible states which nevertheless masquerade as Haar-random states to poly-time observers. First defined by Ji, Liu and Song, such states have found a number of applications ranging from cryptography to the AdS/CFT correspondence. A fundamental question is exactly how much entanglement is required to create such states. Haar-random states, as well as t-designs for t ≥ 2, exhibit near-maximal entanglement. Here we provide the first construction of pseudorandom states with only polylogarithmic entanglement entropy across an equipartition of the qubits, which is the minimum possible. Our construction can be based on any one-way function secure against quantum attack. We additionally show that the entanglement in our construction is fully "tunable", in the sense that one can have pseudorandom states with entanglement Θ(f(n)) for any desired function ω(log n) ≤ f(n) ≤ O(n). More fundamentally, our work calls into question to what extent entanglement is a "feelable" quantity of quantum systems. Inspired by recent work of Gheorghiu and Hoban, we define a new notion which we call "pseudoentanglement": ensembles of efficiently constructible quantum states which hide their entanglement entropy. We show such states exist in the strongest form possible while simultaneously being pseudorandom states. We also describe diverse applications of our result, from entanglement distillation to property testing to quantum gravity.



Based on joint work with Bill Fefferman, Soumik Ghosh, Umesh Vazirani, and Zixin Zhou, arXiv:2211.00747

Speaker's bio:

https://theory.stanford.edu/~abouland/



Statistical inference with privacy and computational constraints
Maryam Aliakbarpour | Boston University and Northeastern University

2023-03-02, 09:30 - 10:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 002

Abstract:

The vast amount of digital data we create and collect has revolutionized many scientific fields and industrial sectors. Yet, despite our success in harnessing this transformative power of data, computational and societal trends emerging from the current practices of data science necessitate upgrading our toolkit for data analysis. In this talk, we discuss how practical considerations such as privacy and memory limits affect statistical inference tasks. In particular, we focus on two examples: First, we consider hypothesis testing with privacy constraints. More specifically, how one can design an algorithm that tests whether two data features are independent or correlated with a nearly-optimal number of data points while preserving the privacy of the individuals participating in the data set. Second, we study the problem of entropy estimation of a distribution by streaming over i.i.d. samples from it. We determine how bounded memory affects the number of samples we need to solve this problem.

Please contact the office for Zoom link information.

Speaker's bio:

Maryam Aliakbarpour is a postdoctoral researcher at Boston University and Northeastern University, where she is hosted by Prof. Adam Smith and Prof. Jonathan Ullman. Before that, she was a postdoctoral research associate at the University of Massachusetts Amherst, hosted by Prof. Andrew McGregor (from Fall 2020 to Summer 2021). In Fall 2020, she was a visiting participant in the Probability, Geometry, and Computation in High Dimensions Program at the Simons Institute at Berkeley. Maryam received her Ph.D. in September 2020 from MIT, where she was advised by Prof. Ronitt Rubinfeld. Maryam was selected for the Rising Stars in EECS in 2018 and won the Neekeyfar Award from the Office of Graduate Education, MIT.



The Power of Feedback in a Cyber-Physical World
Dr. Anne-Kathrin Schmuck | Max Planck Institute for Software Systems

2023-02-28, 09:30 - 10:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Feedback allows systems to seamlessly and instantaneously adapt their behavior to their environment and is thereby the fundamental principle of life and technology -- it lets animals breathe, it stabilizes the climate, it allows airplanes to fly, and the energy grid to operate. During the last century, control technology excelled at using this power of feedback to engineer extremely stable, robust, and reliable technological systems.

With the ubiquity of computing devices in modern technological systems, feedback loops become cyber-physical -- the laws of physics governing technological, social or biological processes interact with (cyber) computing systems in a highly nontrivial manner, pushing towards higher and higher levels of autonomy and self-regulation. While reliability of these systems remains of utmost importance, a fundamental understanding of cyber-physical feedback loops for large-scale CPS lags far behind.

In this talk I will discuss how a control-inspired view on formal methods for reliable software design enables us to utilize the power of feedback for robust and reliable self-adaptation in cyber-physical system design.

Please contact office team for link information.

Speaker's bio:

Anne-Kathrin Schmuck is an independent research group leader at the Max Planck Institute for Software Systems (MPI-SWS) in Kaiserslautern, Germany. Her group is externally funded by the Emmy Noether Programme of the German Science Foundation (DFG). She received the Dipl.-Ing. (M.Sc) degree in engineering cybernetics from OvGU Magdeburg, Germany, in 2009 and the Dr.-Ing. (Ph.D.) degree in electrical engineering from TU Berlin, Germany, in 2015. Between 2015 and 2020 she was a postdoctoral researcher at MPI-SWS mentored by Rupak Majumdar. She currently serves as the co-chair of the IEEE CSS Technical Committee on Discrete Event Systems and as associate editor for the Springer Journal on Discrete Event Dynamical Systems, the IEEE Open Journal of Control Systems and the IFAC Journal on Nonlinear Analysis: Hybrid Systems. Her current research interests include cyber-physical system design, logical control software synthesis, reliability of automation systems and dynamical systems theory.



Fusing AI and Formal Methods for Automated Synthesis
Priyanka Golia | NUS, Singapore and IIT Kanpur

2023-02-23, 09:30 - 10:30
Kaiserslautern building G26, room 111

Abstract:

We entrust large parts of our daily lives to computer systems, which are becoming increasingly more complex. Developing scalable yet trustworthy techniques for designing and verifying such systems is an important problem. In this talk, our focus will be on automated synthesis, a technique that uses formal specifications to automatically generate systems (such as functions, programs, or circuits) that provably satisfy the requirements of the specification. I will introduce a state-of-the-art synthesis algorithm that leverages artificial intelligence to provide an initial guess for the system, and then uses formal methods to repair and verify the guess, synthesizing a provably correct system. I will conclude by exploring the potential for combining AI and formal methods to address real-world scenarios.

Please contact the office team for link information.

Speaker's bio:

Priyanka Golia is a final-year Ph.D. candidate at NUS, Singapore and IIT Kanpur. Her research interests lie at the intersection of formal methods and artificial intelligence. In particular, her dissertation work has focused on designing scalable automated synthesis and testing techniques. Her work received a Best Paper Nomination at ICCAD-21 and was a Best Paper Candidate at DATE-23. She was named one of the EECS Rising Stars in 2022. She has co-presented a tutorial on Automated Synthesis: Towards the Holy Grail of AI at AAAI-22 and IJCAI-22, and she is co-authoring an upcoming book (on invitation from NOW Publishers) on functional synthesis.



Learning for Decision Making: A Tale of Complex Human Preferences
Leqi Liu | Carnegie Mellon University

2023-02-14, 14:00 - 15:00
Virtual talk

Abstract:

Machine learning systems are deployed in diverse decision-making settings in service of stakeholders characterized by complex preferences. For example, in healthcare and finance, we ought to account for various levels of risk tolerance; and in personalized recommender systems, we face users whose preferences evolve dynamically over time. Building systems better aligned with stakeholder needs requires that we take the rich nature of human preferences into account. In this talk, I will give an overview of my research on the statistical and algorithmic foundations for building such human-centered machine learning systems. First, I will present a line of work that draws inspiration from the economics literature to develop learning algorithms that account for the risk preferences of stakeholders. Subsequently, I will discuss a line of work that draws insights from the psychology literature to develop online learning algorithms for personalized recommender systems that account for users’ evolving preferences.

Please contact the office team for link information.

Speaker's bio:

Leqi Liu is a Ph.D. candidate in the Machine Learning Department at Carnegie Mellon University, where she is advised by Zachary Lipton. Her research revolves around machine learning and behavioral sciences, with a focus on developing a theory for building learning systems that interact with people. She is a recipient of the Open Philanthropy AI Fellowship (2020-2023) and has interned at Apple and DeepMind during her Ph.D.



Toward Deep Semantic Understanding: Event-Centric Multimodal Knowledge Acquisition
Manling Li | University of Illinois Urbana Champaign

2023-02-01, 15:00 - 16:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Please note that this is a virtual talk which will be video cast to Saarbrücken and Kaiserslautern.

Traditionally, multimodal information consumption has been entity-centric, with a focus on concrete concepts (such as objects, object types, and physical relations, e.g., a person in a car), but it lacks the ability to understand abstract semantics (such as events and the semantic roles of objects, e.g., driver, passenger, mechanic). However, such event-centric semantics are the core knowledge communicated, regardless of whether it takes the form of text, images, videos, or other data modalities.

At the core of my research in Multimodal Information Extraction (IE) is to bring such deep semantic understanding ability to the multimodal world. My work opens up a new research direction Event-Centric Multimodal Knowledge Acquisition to transform traditional entity-centric single-modal knowledge into event-centric multi-modal knowledge. Such a transformation poses two significant challenges: (1) understanding multimodal semantic structures that are abstract (such as events and semantic roles of objects): I will present my solution of zero-shot cross-modal transfer (CLIP-Event), which is the first to model event semantic structures for vision-language pretraining, and supports zero-shot multimodal event extraction for the first time; (2) understanding long-horizon temporal dynamics: I will introduce Event Graph Model, which empowers machines to capture complex timelines, intertwined relations and multiple alternative outcomes. I will also show its positive results on long-standing open problems, such as timeline generation, meeting summarization, and question answering. Such Event-Centric Multimodal Knowledge starts the next generation of information access, which allows us to effectively access historical scenarios and reason about the future. I will lay out how I plan to grow a deep semantic understanding of language world and vision world, moving from concrete to abstract, from static to dynamic, and ultimately from perception to cognition. Please contact the Office team for Zoom link information.

Speaker's bio:

Manling Li is a Ph.D. candidate at the Computer Science Department of University of Illinois Urbana-Champaign. Her work on multimodal knowledge extraction won the ACL'20 Best Demo Paper Award, and her work on scientific information extraction from COVID literature won the NAACL'21 Best Demo Paper Award. She was a recipient of the Microsoft Research PhD Fellowship in 2021. She was selected as a DARPA Riser in 2022, and an EECS Rising Star in 2022. She was awarded the C.L. Dave and Jane W.S. Liu Award, and has been selected as a Mavis Future Faculty Fellow. She led 19 students to develop the UIUC information extraction system, which ranked 1st in the DARPA AIDA evaluation in 2019 and 2020. She has more than 30 publications on multimodal knowledge extraction and reasoning, and gave tutorials about event-centric multimodal knowledge at ACL'21, AAAI'21, NAACL'22, AAAI'23, etc. Additional information is available at https://limanling.github.io/.



Adaptive constant-depth circuits for manipulating non-abelian anyons
Robert Koenig | TU Muenchen

2023-01-19, 16:00 - 17:00
Virtual talk

Abstract:

We consider Kitaev's quantum double model based on a finite group G and describe quantum circuits for (a) preparation of the ground state, (b) creation of anyon pairs separated by an arbitrary distance, and (c) non-destructive topological charge measurement. We show that for any solvable group G all of the above tasks can be realized by constant-depth adaptive circuits with geometrically local unitary gates and mid-circuit measurements. Each gate may be chosen adaptively depending on previous measurement outcomes. Constant-depth circuits are well suited for implementation on noisy hardware, since it may be possible to execute the entire circuit within the qubit coherence time. Thus our results could facilitate an experimental study of exotic phases of matter with non-abelian particle statistics. We also show that adaptiveness is essential for our circuit construction. Namely, task (b) cannot be realized by non-adaptive constant-depth local circuits for any non-abelian group G. This is in sharp contrast with abelian anyons, which can be created and moved over an arbitrary distance by a depth-1 circuit composed of generalized Pauli gates.

This is joint work with S. Bravyi, I. Kim and A. Kliesch, arXiv:2205.01933.

Speaker's bio:

-



Enforcing Stack Safety on a Capability Machine
Aïna Linn Georges | Aarhus University

2022-11-24, 10:00 - 11:00
Saarbrücken building E1 5, room 005 / simultaneous videocast to Kaiserslautern building G26, room 207

Abstract:

Memory safety is a major source of vulnerabilities in computer systems. Capability machines are a type of CPU that supports fine-grained privilege separation using capabilities: machine words that carry forms of authority. Over the last decade, CHERI, a family of capability machines, has matured into an extensive design featuring, among other things, a full UNIX-style operating system, CheriBSD. Building on ideas from CHERI, capability machines are even becoming a reality in industry; the Arm Morello program is a research program led by Arm to create a prototype system on chip with capabilities. One of the promises of capability machines is that they can enforce security properties that we expect from high-level languages, in particular stack safety, even when machine code is linked with other untrusted and possibly adversarial machine code. In this talk, I will discuss what it takes to realise that promise in practice. Since stack safety properties can be quite subtle, it is crucial to formally reason about the enforcement mechanisms enabled by capabilities. This is a complex task that involves reasoning about the interaction between known code and unknown, untrusted code. We use Iris to formally reason about the deep semantic properties of capability machines.

Speaker's bio:

-



Designing AI Systems with Steerable Long-Term Dynamics
Thorsten Joachims | Cornell University

2022-11-16, 10:00 - 11:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 002 / Meeting ID: 99457028566

Abstract:

The feedback that users provide through their choices (e.g. clicks, purchases) is one of the most common types of data readily available for training autonomous systems, and it is widely used in online platforms. However, naively training systems based on choice data may only improve short-term engagement, but not the long-term sustainability of the platform. In this talk, I will discuss some of the pitfalls of engagement-maximization, and explore methods that allow us to supplement engagement with additional criteria that are not limited to individual action-response metrics. The goal is to give platform operators a new set of macroscopic interventions for steering the dynamics of the platform, providing a new level of abstraction that goes beyond the engagement with individual recommendations or rankings.

Please contact the Office team for Zoom link information.

Speaker's bio:

Thorsten Joachims is a Professor in the Department of Computer Science and in the Department of Information Science at Cornell University, and he is an Amazon Scholar. His research interests center on a synthesis of theory and system building in machine learning, with applications in information access, language technology, and recommendation. His past research focused on counterfactual and causal inference, learning to rank, structured output prediction, support vector machines, text classification, learning with preferences, and learning from implicit feedback. He is an ACM Fellow, AAAI Fellow, KDD Innovations Award recipient, and member of the ACM SIGIR Academy.



AI-assisted Programming: Applications, User experiences, and Neuro-symbolic techniques
Sumit Gulwani | Microsoft Research

2022-11-07, 10:30 - 19:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 99457028566

Abstract:

AI can enhance programming experiences for a diverse set of programmers: from professional developers and data scientists (proficient programmers) who need help in software engineering and data wrangling, all the way to spreadsheet users (low-code programmers) who need help in authoring formulas, and students (novice programmers) who seek hints when stuck with their programming homework. To communicate their need to AI, users can express their intent explicitly—as input-output examples or natural-language specification—or implicitly—where they encounter a bug (and expect AI to suggest a fix), or simply allow AI to observe their last few lines of code or edits (to have it suggest the next steps).

The task of synthesizing an intended program snippet from the user’s intent is both a search and a ranking problem. Search is required to discover candidate programs that correspond to the (often ambiguous) intent, and ranking is required to pick the best program from multiple plausible alternatives. This creates a fertile playground for combining symbolic-reasoning techniques, which model the semantics of programming operators, and machine-learning techniques, which can model human preferences in programming. Recent advances in large language models like Codex offer further promise to advance such neuro-symbolic techniques.
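The search-and-ranking view can be made concrete with a toy programming-by-example sketch (purely illustrative: the four-operation string DSL and the `synthesize` helper are invented here and are not part of PROSE, Flash Fill, or Codex). Programs are sequences of operations, enumerated shortest-first so that simplicity acts as a crude ranking; the first program consistent with all input-output examples wins.

```python
from itertools import product

# A toy DSL: the atomic string operations a program may compose.
OPS = {
    "lower": str.lower,
    "upper": str.upper,
    "strip": str.strip,
    "first": lambda s: s.split()[0] if s.split() else s,  # first word
}

def synthesize(examples, max_len=3):
    """Return the shortest sequence of ops consistent with all
    input-output examples: enumeration is the search, and the
    shortest-first order is a simple ranking by program size."""
    for n in range(1, max_len + 1):
        for prog in product(OPS, repeat=n):
            def run(s, prog=prog):
                for op in prog:
                    s = OPS[op](s)
                return s
            if all(run(i) == o for i, o in examples):
                return prog
    return None

# Learn "take the first word, uppercased" from two examples.
prog = synthesize([(" hello world", "HELLO"), ("foo bar", "FOO")])
# prog == ("upper", "first")
```

Real synthesizers replace this brute-force enumeration with semantics-guided search and learned ranking functions, which is exactly where the neuro-symbolic combination described above comes in.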

Finally, a few critical requirements in AI-assisted programming are usability, precision, and trust; and they create opportunities for innovative user experiences and interactivity paradigms. In this talk, I will explain these concepts using some existing successes, including the Flash Fill feature in Excel, Data Connectors in PowerQuery, and IntelliCode/CoPilot in Visual Studio. I will also describe several new opportunities in AI-assisted programming, which can drive the next set of foundational neuro-symbolic advances.

This talk will be a hybrid event. You can join the meeting in E1 5 room 002 in Saarbrücken, in G26 room 111 in Kaiserslautern or via Zoom. Please contact the Office team for Zoom link information.

Speaker's bio:

Sumit Gulwani is a computer scientist connecting ideas, people, and research & practice. He invented the popular Flash Fill feature in Excel, which has now also found its place in middle-school computing textbooks. He leads the PROSE research and engineering team at Microsoft that develops APIs for program synthesis and has incorporated them into various Microsoft products including Visual Studio, Office, Notebooks, PowerQuery, PowerApps, PowerAutomate, Powershell, and SQL. He is a sponsor of storytelling trainings and initiatives within Microsoft. He has started a novel research fellowship program in India, a remote apprenticeship model to scale up impact while nurturing globally diverse talent and growing research leaders. He has co-authored 11 award-winning papers (including 3 test-of-time awards from ICSE and POPL) amongst 140+ research publications across multiple computer science areas and delivered 60+ keynotes/invited talks. He was awarded the Max Planck-Humboldt medal in 2021 and the ACM SIGPLAN Robin Milner Young Researcher Award in 2014 for his pioneering contributions to program synthesis and intelligent tutoring systems. He obtained his PhD in Computer Science from UC-Berkeley, and was awarded the ACM SIGPLAN Outstanding Doctoral Dissertation Award. He obtained his BTech in Computer Science and Engineering from IIT Kanpur, and was awarded the President’s Gold Medal.



Theoretical Reflections on Quantum Supremacy
Umesh Vazirani | University of California, Berkeley

2022-10-13, 16:00 - 17:00
Virtual talk

Abstract:

Google's 2019 experiment and their announcement of quantum supremacy relied on the inability of classical computers to efficiently carry out a task called random quantum circuit sampling (RCS). I will describe recent theoretical developments on the complexity of RCS. I will also describe a different line of work that provides scalable and rigorous proofs of quantumness based on an approach called the cryptographic leash, and the prospects of a concrete experimental challenge based on this approach.

Speaker's bio:

https://people.eecs.berkeley.edu/~vazirani/



Quantum Money
Peter Shor | MIT

2022-09-15, 16:00 - 17:00
Virtual talk

Abstract:

Quantum money is a cryptographic protocol in which one party (the mint) can prepare quantum states, each with a unique serial number, that can be verified but not duplicated. We sketch our 2010 quantum money protocol based on knot invariants and planar embeddings of knots, and outline other proposals for quantum money that have been made since then, including our recent failed protocol.

Speaker's bio:

Peter Williston Shor is a professor of applied mathematics at MIT. He is known for his work on quantum computation, in particular for devising Shor's algorithm, a quantum algorithm for factoring exponentially faster than the best currently-known algorithm running on a classical computer.



Hamiltonian simulation theory: from near-term quantum computing to quantum gravity
Tony Cubitt | University College London

2022-07-28, 16:00 - 16:45
Saarbrücken building E1 4, room 024

Abstract:

"Analogue" Hamiltonian simulation involves engineering a Hamiltonian of interest in the laboratory and studying its properties experimentally. Large-scale Hamiltonian simulation experiments have been carried out in optical lattices, ion traps and other systems for two decades. This is often touted as the most promising near-term application of quantum computing technology, as it is argued it does not require a scalable, fault-tolerant quantum computer.

Despite this, the theoretical basis for Hamiltonian simulation is surprisingly sparse. Even a precise definition of what it means to simulate a Hamiltonian was lacking. In my talk, I will explain how we put analogue Hamiltonian simulation on a rigorous theoretical footing, by drawing on techniques from Hamiltonian complexity theory in computer science, and Jordan and C* algebra results in mathematics.

I will then explain how this proved to be far more fruitful than a mere mathematical tidying-up exercise. It led to the discovery of universal quantum Hamiltonians [Science 351:6278, p.1180 (2016); Proc. Natl. Acad. Sci. 115:38, p.9497 (2018); J. Stat. Phys. 176:1, p.228–261 (2019); Annales Henri Poincaré 23, p.223 (2021), https://link.springer.com/article/10.1007/s00023-021-01111-7], later shown to have a deep connection back to quantum complexity theory [PRX Quantum 3:010308 (2022)]. The theory has also found applications in developing new and more efficient fermionic encodings for quantum computing [Phys. Rev. B 104:035118 (2021)], leading to dramatic reductions in the resource requirements for Hamiltonian simulation on near-term quantum computers [Nature Commun. 12:1, 4929 (2021)]. It has even found applications in quantum gravity, leading to the first toy models of AdS/CFT to encompass energy scales, dynamics, and (toy models of) black hole formation [J. High Energy Phys. 2019:17 (2019); J. High Energy Phys. 2022:52 (2022)].

Speaker's bio:

-



Consecutive integers divisible by a power of their largest prime factor
Jean-Marie De Koninck | Université Laval

2022-07-11, 14:00 - 15:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Given an integer n ≥ 2, let P(n) stand for its largest prime factor. Given integers k ≥ 2 and ℓ ≥ 2, consider the set E_{k,ℓ} of those integers n ≥ 2 for which P(n+i)^ℓ | n+i for i = 0, 1, . . . , k. Each of these sets is very small. For instance, the smallest element of E_{3,2} is 1 294 298, the smallest known element of E_{3,3} has 77 digits, and no elements of E_{4,2} are known, even though all these sets are believed to be infinite. In this talk, using elementary, analytic and probabilistic approaches, we will shed some light on these sets and raise several open problems. This is joint work with Nicolas Doyon, Florian Luca and Matthieu Moineau.
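Membership in these sets is straightforward to check by machine. The sketch below reads the condition as covering k consecutive integers starting at n (which is consistent with the stated smallest element of E_{3,2}), and verifies that 1 294 298 = 2 · 61 · 103² starts a run of three integers each divisible by the square of its largest prime factor:

```python
def largest_prime_factor(n):
    """P(n): largest prime factor of n >= 2, by simple trial division."""
    p, last = 2, 1
    while p * p <= n:
        while n % p == 0:
            last, n = p, n // p
        p += 1
    return n if n > 1 else last  # leftover n > 1 is the largest prime factor

def in_E(n, k, l):
    """Does P(n+i)**l divide n+i for each of i = 0, ..., k-1?"""
    return all((n + i) % largest_prime_factor(n + i) ** l == 0
               for i in range(k))
```

For example, `in_E(1294298, 3, 2)` holds because 1 294 298 = 2·61·103², 1 294 299 = 3⁴·19·29², and 1 294 300 = 2²·5²·7·43². Finding such runs by brute force, as opposed to verifying them, is exactly what makes these sets interesting: they are extraordinarily sparse.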

---

This talk will be a hybrid event. You can join the meeting in E1 5 room 002 in Saarbrücken, in G26 room 111 in Kaiserslautern, or via Zoom. Please contact the Office team for Zoom link information.

Speaker's bio:

Jean-Marie De Koninck has been a researcher and professor of mathematics at Université Laval (Québec) for more than forty years and is well known to the scientific community for his work in analytic number theory. He is the author of 18 books and 163 peer-reviewed articles in scientific journals. He is now Professor Emeritus. Professor De Koninck has also hosted his own science outreach television show "C'est mathématique!", broadcast on the French-Canadian channel Canal Z and later on TFO (Télévision française de l'Ontario). In 2005, he created the Sciences and Mathematics in Action (SMAC) program, whose purpose is to excite kids about science and mathematics. He is well known by the general public as the founder of Operation Red Nose, a road safety operation involving over 55,000 volunteers across Canada. He was also very active in the media during the ten years he served as President of the Table québécoise de la sécurité routière. He is now a member of the Board for the Société de l'assurance automobile du Québec. Many have also seen him as a color commentator for nationally televised swim events.



The Skolem Landscape
Joël Ouaknine | Max Planck Institute for Software Systems

2022-05-27, 14:00 - 15:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

The Skolem Problem asks how to determine algorithmically whether a given linear recurrence sequence (such as the Fibonacci numbers) has a zero. It is a central question in dynamical systems and number theory, and has many connections to other branches of mathematics and computer science. Unfortunately, its decidability has been open for nearly a century! In this talk, I will present a brief survey of what is known on the Skolem Problem and related questions, including recent and ongoing developments.
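A bounded search makes the asymmetry of the Skolem Problem concrete: if a zero occurs early it is easy to find, but no amount of unsuccessful searching certifies that none exists. A minimal sketch (the encoding of the recurrence and the search bound are illustrative choices):

```python
def first_zero(coeffs, init, bound=10_000):
    """Search the linear recurrence sequence
        u(n) = c1*u(n-1) + ... + ck*u(n-k),
    with coeffs = [c1, ..., ck] and init = [u(0), ..., u(k-1)],
    for a zero at an index below `bound`. Returning None proves
    nothing: deciding the absence of zeros is the open Skolem Problem."""
    for n, val in enumerate(init):
        if val == 0:
            return n
    window = list(init)  # the k most recent terms
    for n in range(len(init), bound):
        nxt = sum(c * x for c, x in zip(coeffs, reversed(window)))
        if nxt == 0:
            return n
        window = window[1:] + [nxt]
    return None

first_zero([1, 1], [1, -1])  # -> 2  (the sequence runs 1, -1, 0, ...)
```

The Fibonacci recurrence with initial values (2, 1), i.e. the Lucas numbers, never hits zero, and the function duly returns None for any bound; turning that empirical observation into a proof for arbitrary sequences is precisely what has been open for nearly a century.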

---

This talk will be a hybrid event. You can join the meeting in E1 5 room 002 in Saarbrücken, in G26 room 111 in Kaiserslautern, or via Zoom. Please contact the Office team for Zoom link information.

Speaker's bio:

-



Orderrr! A tale of money, intrigue, and specifications
Lorenzo Alvisi | Cornell University

2022-05-24, 09:30 - 10:30
Saarbrücken building E1 5, room 002 / Zoom

Abstract:

Mistrust over traditional financial institutions is motivating the development of decentralized financial infrastructures based on blockchains. In particular, consortium blockchains (such as the Linux Foundation's Hyperledger and Facebook's Diem) are emerging as the approach preferred by businesses. These systems allow only a well-known set of mutually distrustful parties to add blocks to the blockchain; in this way, they aim to retain the benefits of decentralization without embracing the cypherpunk philosophy that informed Nakamoto's disruptive vision. At the core of consortium blockchains is State Machine Replication, a classic technique borrowed from fault-tolerant distributed computing; to ensure the robustness of their infrastructure, consortium blockchains actually borrow the Byzantine-tolerant version of this technique, which guarantees that the blockchain will operate correctly even if up to about a third of the contributing parties are bent on cheating.

But, sometimes, "a borrowing is a sorrowing".

I will discuss why Byzantine-tolerant state machine replication is fundamentally incapable of recognizing, never mind preventing, an ever-present scourge of financial exchanges: the fraudulent manipulation of the order in which transactions are processed. I will also discuss how its specification needs to be expanded to give it a fighting chance.

But is it possible to completely eliminate the ability of Byzantine parties to engage in order manipulation? What meaningful ordering guarantees can be enforced? And at what cost?

---

This talk will be a hybrid event. You can attend either in room 002 or via Zoom. Please contact the office team for Zoom link information.

Speaker's bio:

Lorenzo Alvisi is the Tisch University Professor of Computer Science at Cornell University. Prior to joining Cornell, he held an endowed professorship at UT Austin, where he is now a Distinguished Professor Emeritus. Lorenzo received his Ph.D. in 1996 from Cornell, after earning a Laurea cum Laude in Physics from the University of Bologna. His research interests span theory and practice of distributed computing, with a focus on scaling strong consistency and dependability guarantees. He is a Fellow of the ACM and IEEE, an Alfred P. Sloan Foundation Fellow, and the recipient of a Humboldt Research Award, an NSF Career Award, and several teaching awards. He serves on the editorial boards of ACM TOCS and Springer's Distributed Computing, and on the steering committees of EuroSys and SOSP. Besides distributed computing, he is passionate about classical music and red Italian motorcycles.



Quantum algorithms for search and optimization
Andris Ambainis | University of Latvia

2022-05-19, 16:00 - 17:00
Virtual talk / Meeting ID: 945 7732 1297

Abstract:

Quantum algorithms are useful for a variety of problems in search and optimization. This line of work started with Grover's quantum search algorithm, which achieved a quadratic speedup over naive exhaustive search, but has now developed far beyond it.

In this talk, we describe three recent results in this area:

1. We show that, for any classical algorithm that uses a random walk to find an object with some property (by walking until the random walker reaches such an object), there is an almost quadratically faster quantum algorithm (https://arxiv.org/abs/1903.07493).

2. We show that the best known exponential-time algorithms for solving several NP-complete problems (such as the Travelling Salesman Problem, or TSP) can be improved quantumly (https://arxiv.org/abs/1807.05209). For example, for the TSP, the best known classical algorithm needs time O(2^n), but our quantum algorithm solves the problem in time O(1.728...^n).

3. We show an almost quadratic quantum speedup for a number of geometric problems, such as finding three points that are on the same line (https://arxiv.org/abs/2004.08949).
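Grover's quadratic speedup is easy to see in a small state-vector simulation. The textbook sketch below (illustrative context only, not one of the results above) runs about (π/4)·√N rounds of oracle reflection plus diffusion, after which almost all amplitude sits on the marked item:

```python
import numpy as np

def grover(n_qubits, marked):
    """Simulate Grover search over N = 2**n_qubits items.

    Returns (most_likely_item, number_of_oracle_queries). A classical
    exhaustive search would need ~N queries; Grover needs ~sqrt(N)."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))      # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))  # ~ (pi/4) * sqrt(N) rounds
    for _ in range(iterations):
        state[marked] *= -1.0                 # oracle: flip marked amplitude
        state = 2.0 * state.mean() - state    # diffusion: reflect about mean
    return int(np.argmax(state ** 2)), iterations
```

For 16 items the marked element is found after 3 oracle queries with probability above 0.96; the random circuit sampling task in the supremacy experiments is, of course, a very different and classically much harder beast.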

Speaker's bio:

-



Saving Stochastic Bandits from Poisoning Attacks via Limited Data Verification
Long Tran-Thanh | University of Warwick

2022-04-07, 11:00 - 12:00
Virtual talk

Abstract:

In this talk, we study how bandit algorithms in a bounded-reward setting can be contaminated and fooled into learning wrong actions. In particular, we consider a strong attacker model in which the attacker aims to fool the learning algorithm into learning a suboptimal bandit policy. To do so, the attacker can observe both the selected actions and their corresponding rewards, and can contaminate the rewards with additive noise. We show that any bandit algorithm with regret O(log T) can be forced to suffer linear regret with an expected amount of contamination O(log T). We also show that this amount of contamination is necessary: there exists a no-regret bandit algorithm, namely the classical UCB, that requires Ω(log T) contamination in order to be forced into linear regret. To combat such poisoning attacks, our second main contribution is to propose verification-based mechanisms, which use a verification scheme to access a limited number of uncontaminated rewards. In particular, for the case of unlimited verifications, we show that with an expected O(log T) number of verifications, a simple modified version of an Explore-then-Commit-type bandit algorithm can restore the order-optimal O(log T) regret irrespective of the amount of contamination used by the attacker. We also provide a UCB-like verification scheme, called Secure-UCB, that likewise enjoys full recovery from any attack, again with an expected O(log T) number of verifications. To derive a matching lower bound on the number of verifications, we prove that for any order-optimal bandit algorithm, O(log T) verifications are necessary to recover the order-optimal regret.
On the other hand, when the number of verifications is bounded above by a budget B, we propose a novel algorithm, Secure-BARBAR, which provably achieves O(min{C, T/√B}) regret with high probability against weak attackers (i.e., attackers who have to place the contamination before seeing the actual pulls of the bandit algorithm), where C is the total amount of contamination by the attacker. This new result breaks the known Ω(C) lower bound of the non-verified setting when C is large.

This is joint work with Anshuka Rangi and Haifeng Xu.
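For context, the O(log T) baseline under attack here is the classical UCB index rule. A minimal sketch of plain UCB1 on Bernoulli arms follows; it models neither the attacker nor the talk's Secure-UCB verification scheme, and the arm means and horizon are illustrative choices:

```python
import math
import random

def ucb1(arm_means, T, seed=0):
    """Plain UCB1: pull the arm maximizing
        empirical mean + sqrt(2 * ln(t) / pulls).
    Rewards are Bernoulli(arm_means[a]). Returns per-arm pull counts."""
    rng = random.Random(seed)
    K = len(arm_means)
    counts = [0] * K
    sums = [0.0] * K
    for t in range(1, T + 1):
        if t <= K:
            arm = t - 1  # initialization: pull each arm once
        else:
            arm = max(range(K),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.9, 0.5, 0.1], T=2000)  # the 0.9 arm should dominate
```

Because the index is built purely from observed rewards, an attacker who injects additive noise into a logarithmic number of those observations can steer the argmax, which is exactly the vulnerability the talk's verification mechanisms are designed to close.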

Please contact crichter@mpi-sws.org for link information

Speaker's bio:

Long Tran-Thanh is an Associate Professor in Artificial Intelligence at the University of Warwick. Long has been doing active research in a number of key areas of Artificial Intelligence and multi-agent systems, mainly focusing on multi-armed bandits, game theory, and incentive engineering, and their applications to crowdsourcing, human-agent learning, and AI for Social Good. He has published more than 80 papers at top AI/ML conferences (including AAAI, AAMAS, CVPR, ECAI, IJCAI, NeurIPS, UAI) and journals (JAAMAS, AIJ), and has received a number of prestigious national and international awards, including two best paper honourable mention awards at top-tier AI conferences (AAAI, ECAI), two Best PhD Thesis Awards (in the UK and in Europe), and the AIJ Prominent Paper Award for authoring one of the most influential papers published in the flagship journal in AI. Long currently serves as a board member (2018-2024) of the IFAAMAS Directory Board, the main international governing body of the International Federation for Autonomous Agents and Multiagent Systems, a major sub-field of the AI community. He is also the local chair of the AAMAS 2021 and AAMAS 2023 conferences, the flagship international conference of the multi-agent systems community.



Machine learning for algorithm design
Maria Florina Balcan | Carnegie Mellon University

2022-03-29, 15:00 - 16:00
Virtual talk

Abstract:

The classic textbook approach to designing and analyzing algorithms for combinatorial problems considers worst-case instances of the problem, about which the algorithm designer has no prior information. Since for many problems such worst-case guarantees are quite weak, practitioners often employ a data-driven algorithm design approach; specifically, they use machine learning and instances of the problem from their specific domain to learn a method that works well in that domain. Historically, such data-driven algorithmic techniques have come with no performance guarantees. In this talk, I will describe our recent work on providing performance guarantees for data-driven algorithm design in both the distributional and online learning formalizations.

--

Please contact the office team for zoom link information.

Speaker's bio:

Maria Florina Balcan is the Cadence Design Systems Professor of Computer Science in the School of Computer Science at Carnegie Mellon University. Her main research interests are machine learning, artificial intelligence, theory of computing, and algorithmic game theory. She is a Simons Investigator, a Sloan Fellow, a Microsoft Research New Faculty Fellow, and a recipient of the ACM Grace Murray Hopper Award, an NSF CAREER award, and several best paper awards. She has co-chaired major conferences in the field: the Conference on Learning Theory (COLT) 2014, the International Conference on Machine Learning (ICML) 2016, and Neural Information Processing Systems (NeurIPS) 2020. She has also been the general chair for the International Conference on Machine Learning (ICML) 2021, a board member of the International Machine Learning Society, and a co-organizer for the Simons semester on Foundations of Machine Learning.



Confidential and Private Collaborative Machine Learning
Adam Dziedzic | The Vector Institute & University of Toronto

2022-03-04, 14:00 - 15:00
Virtual talk

Abstract:

This talk outlines my work building systems that enable applications to securely interact with users' data while preserving individuals' privacy. First, I'll talk about how we can bring the power of secure computation to difficult settings: TimeCrypt is an encrypted time-series database design that meets the scalability and low-latency requirements associated with time-series workloads. Then, I'll discuss work on using end-to-end privacy as a strong foundation for data protection: Zeph is a new end-to-end privacy system that provides the means to extract value from encrypted streaming data safely while ensuring data confidentiality and privacy by serving only privacy-compliant views of the data. Throughout the talk, I will discuss the prevalent challenges of efficiency, functionality, and accessibility in this research area; my approach to addressing these challenges; and future directions that will help bring end-to-end privacy to an even wider range of applications.

Please contact MPI-SWS Office Team for link information

Speaker's bio:

Adam Dziedzic is a Postdoctoral Fellow at the Vector Institute and the University of Toronto, advised by Prof. Nicolas Papernot, where he is working on trustworthy ML. Adam finished his Ph.D. at the University of Chicago, advised by Prof. Sanjay Krishnan, where he worked on input and model compression for adaptive and robust neural networks. He obtained his Bachelor's and Master's degrees from Warsaw University of Technology. Adam also studied at DTU and EPFL. He has worked at CERN, Barclays Investment Bank, Microsoft Research, and Google.



Systems Designs for End-to-End Privacy
Anwar Hithnawi | ETH Zurich

2022-03-03, 10:00 - 11:00
Virtual talk

Abstract:

The potential of data to transform science and society has spurred unparalleled efforts to collect it in increasingly sensitive and granular forms. This accumulation of sensitive data did not materialize without issues and has raised severe societal concerns. These concerns appear amply justified by numerous reports of data breaches and misuse. Today, we are at an inflection point: if we want to continue enjoying the benefits of data-driven services, we need to place privacy at the center of our data ecosystems.

This talk outlines my work building systems that enable applications to securely interact with users' data while preserving individuals' privacy. First, I'll talk about how we can bring the power of secure computation to difficult settings: TimeCrypt is an encrypted time-series database design that meets the scalability and low-latency requirements associated with time-series workloads. Then, I'll discuss work on using end-to-end privacy as a strong foundation for data protection: Zeph is a new end-to-end privacy system that provides the means to extract value from encrypted streaming data safely while ensuring data confidentiality and privacy by serving only privacy-compliant views of the data. Throughout the talk, I will discuss the prevalent challenges of efficiency, functionality, and accessibility in this research area; my approach to addressing these challenges; and future directions that will help bring end-to-end privacy to an even wider range of applications.

---

Please contact the MPI-SWS Office Team for the Zoom link information.

Speaker's bio:

Anwar Hithnawi is an Ambizione research fellow at ETH Zurich where she leads the Privacy-Preserving Systems Lab (pps-lab.com). She works at the intersection of systems, data privacy, and applied cryptography. Anwar received her doctoral degree in computer science from ETH Zurich. Prior to joining ETH Zurich as a research fellow in 2020, she was a postdoctoral researcher at UC Berkeley. She is the recipient of an SNSF Ambizione grant, the Facebook Research Award, an SNSF Postdoctoral Fellowship, and the Google Anita Borg Memorial Scholarship.



Empowering People to Have Secure and Private Interactions with Digital Technologies
Pardis Emami-Naeini | University of Washington

2022-03-02, 17:00 - 18:00
Virtual talk

Abstract:

Digital technologies are evolving with advanced capabilities. To function, these technologies rely on collecting and processing various types of sensitive data from their users. These data practices could expose users to a wide array of security and privacy risks. My research at the intersection of security, privacy, and human-computer interaction aims to help all people have safer interactions with digital technologies. In this talk, I will share results on people’s security and privacy preferences and attitudes toward technologies such as smart devices and remote communication tools. I will then describe a security and privacy transparency tool that I designed and evaluated to address consumers’ needs when purchasing and interacting with smart devices. I will end my talk by discussing emerging and future directions for my research to design equitable security and privacy tools and policies by studying and designing for the needs of diverse populations.

Please contact MPI-SWS Office Team for link information

Speaker's bio:

Pardis Emami-Naeini is a postdoctoral researcher in the Security and Privacy Research Lab at the University of Washington. Her research is broadly at the intersection of security and privacy, usability, and human-computer interaction. Her work has been published at flagship venues in security (IEEE S&P, SOUPS) and human-computer interaction and social sciences (CHI, CSCW) and covered by multiple outlets, including Wired and the Wall Street Journal. Her research has informed the National Institute of Standards and Technology (NIST), Consumer Reports, and World Economic Forum in their efforts toward designing usable and informative security and privacy labels for smart devices. Pardis received her B.Sc. degree in computer engineering from Sharif University of Technology in 2015 and her M.Sc. and Ph.D. degrees in computer science from Carnegie Mellon University in 2018 and 2020, respectively. She was selected as a Rising Star in electrical engineering and computer science in October 2019 and was awarded the 2019-2020 CMU CyLab Presidential Fellowship.



Knowledge is Power: Symbolic Knowledge Distillation, Commonsense Morality, and Multimodal Script Knowledge
Yejin Choi | University of Washington, Seattle, and Allen Institute for AI

2022-03-01, 18:00 - 19:00
Virtual talk

Abstract:

Scale appears to be the winning recipe in today's AI leaderboards. And yet, extreme-scale neural models are still brittle, making errors that are often nonsensical and even counterintuitive. In this talk, I will argue for the importance of knowledge, especially commonsense knowledge, and demonstrate how smaller models developed in academia can still have an edge over larger industry-scale models if powered with knowledge.

First, I will introduce "symbolic knowledge distillation", a new framework to distill larger neural language models into smaller commonsense models, which leads to a machine-authored KB that wins, for the first time, over a human-authored KB in all criteria: scale, accuracy, and diversity. Next, I will present an experimental conceptual framework toward computational social norms and commonsense morality, so that neural language models can learn to reason that "helping a friend" is generally a good thing to do, but "helping a friend spread fake news" is not. Finally, I will discuss an approach to multimodal script knowledge demonstrating the power of complex raw data, which leads to new SOTA performances on a dozen leaderboards that require grounded, temporal, and causal commonsense reasoning.

Speaker's bio:

Yejin Choi is Brett Helsel Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a senior research manager at AI2 overseeing the project Mosaic. Her research investigates a wide range of problems including commonsense knowledge and reasoning, neuro-symbolic integration, multimodal representation learning, and AI for social good. She is a co-recipient of the ACL Test of Time award in 2021, the CVPR Longuet-Higgins Prize in 2021, a NeurIPS Outstanding Paper Award in 2021, the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award in 2018, the inaugural Alexa Prize Challenge in 2017, IEEE AI's 10 to Watch in 2016, and the ICCV Marr Prize in 2013. https://homes.cs.washington.edu/~yejin/



Improving People’s Adoption of Security and Privacy Behaviors
Yixin Zou | University of Michigan

2022-03-01, 15:00 - 16:00
Virtual talk

Abstract:

Experts recommend a plethora of advice for staying safe online, yet people still use weak passwords, fall for scams, or ignore software updates. Such inconsistent adoption of protective behaviors is understandable given the need to navigate other priorities and constraints in everyday life. Yet when the actions taken are insufficient to mitigate potential risks, it leaves people – especially those already marginalized – vulnerable to dire consequences from financial loss to abuse and harassment.

In this talk, I share findings from my research on hurdles that prevent people from adopting secure behaviors and solutions that encourage adoption in three domains: designing data breach notifications, informing privacy interface guidelines in regulations, and supporting survivors of tech-enabled abuse. (1) Even small changes in system design can make a big difference. I empirically show consumers’ low awareness of data breaches, rational justifications and biases behind inaction, and how to motivate consumers to change breached passwords through nudges in breach notifications. (2) Public policy is essential in incentivizing companies to implement better data practices, but policymaking needs to be informed by evidence from research. I present a series of user studies that led to a user-tested icon for conveying the "do not sell my personal information" opt-out, now part of the California Consumer Privacy Act (CCPA). (3) Different user groups have different threat models and safety needs, requiring special considerations in developing and deploying interventions. Drawing on findings from focus groups, I discuss how computer security support agents can help survivors of tech-enabled abuse using a trauma-informed approach. Altogether, I highlight the impact of my research on technology design, public policy, and educational efforts. I end the talk by discussing how my interdisciplinary, human-centered approach in solving security and privacy challenges can apply to future work such as improving expert advice and developing trauma-informed computing systems.

Please contact MPI-SWS Office Team for link information

Speaker's bio:

Yixin Zou (she/her) is a Ph.D. Candidate at the University of Michigan School of Information. Her research interests span cybersecurity, privacy, and human-computer interaction, with an emphasis on improving people’s adoption of protective behaviors and supporting vulnerable populations (e.g., survivors of intimate partner violence and older adults) in protecting their digital safety. Her research has received a Best Paper Award at the Symposium on Usable Privacy and Security (SOUPS) and two Honorable Mentions at the ACM Conference on Human Factors in Computing Systems (CHI). She has been an invited speaker at the US Federal Trade Commission's PrivacyCon, and she co-led the research effort that produced the opt-out icon in the California Consumer Privacy Act (CCPA). She has also collaborated with industry partners at NortonLifeLock and Mozilla, and her research at Mozilla has directly influenced the product development of Firefox Monitor. Before joining the University of Michigan, she received a Bachelor’s degree in Advertising from the University of Illinois at Urbana-Champaign.



Specification-Guided Policy-Synthesis
Suguman Bansal | University of Pennsylvania

2022-02-28, 15:00 - 16:00
Virtual talk

Abstract:

Policy synthesis, the algorithmic design of policies for computational systems, is one of the fundamental problems in computer science. Building on concise task specifications written in high-level logical specification languages, this talk will cover synthesis algorithms using two contrasting approaches: first, the classical logic-based approach of reactive synthesis; second, the modern learning-based approach of reinforcement learning. I will present our scalable and efficient state-of-the-art algorithms for synthesis from high-level specifications under both approaches, and investigate whether formal guarantees are possible. We will conclude with a forward-looking view of how these contributions support trustworthy AI.

Please contact MPI-SWS Office Team for link information

Speaker's bio:

Suguman Bansal is an NSF/CRA Computing Innovation Postdoctoral Fellow at the University of Pennsylvania, mentored by Prof. Rajeev Alur. Her primary area of research is Formal Methods and Programming Languages, and her secondary area of research is Artificial Intelligence. She is the recipient of the 2020 NSF CI Fellowship and has been named a 2021 MIT EECS Rising Star. Her research has appeared at venues in formal methods and programming languages (CAV, POPL, TACAS), and artificial intelligence and machine learning (AAAI, NeurIPS). She completed her Ph.D. in 2020, advised by Prof. Moshe Y. Vardi, at Rice University. She received her B.S. with Honors in 2014 from Chennai Mathematical Institute.



Grounding Language by Seeing, Hearing, and Interacting
Rowan Zellers | University of Washington

2022-02-23, 16:00 - 17:00
Virtual talk

Abstract:

As humans, our understanding of language is grounded in a rich mental model about "how the world works" – that we learn through perception and interaction. We use this understanding to reason beyond what is literally said, imagining how situations might unfold in the world. Machines today struggle at making such connections, which limits how they can be safely used. In my talk, I will discuss three lines of work to bridge this gap between machines and humans. I will first discuss how we might measure grounded understanding. I will introduce a suite of approaches for constructing benchmarks, using machines in the loop to filter out spurious biases. Next, I will introduce PIGLeT: a model that learns physical commonsense understanding by interacting with the world through simulation, using this knowledge to ground language. PIGLeT learns linguistic form and meaning – together – and outperforms text-to-text only models that are orders of magnitude larger. Finally, I will introduce MERLOT, which learns about situations in the world by watching millions of YouTube videos with transcribed speech. The model learns to jointly represent video, audio, and language, together and over time – learning multimodal and neural script knowledge representations. Together, these directions suggest a path forward for building machines that learn language rooted in the world.

Please contact MPI-SWS Office Team for link information

Speaker's bio:

Rowan Zellers is a final year PhD candidate at the University of Washington in Computer Science & Engineering, advised by Yejin Choi and Ali Farhadi. His research focuses on enabling machines to understand language, vision, sound, and the world beyond these modalities. He has been recognized through NSF and ARCS Graduate Fellowships, and a NeurIPS 2021 outstanding paper award. His work has appeared in several media outlets, including Wired, the Washington Post, and the New York Times. In the past, he graduated from Harvey Mudd College with a B.S. in Computer Science & Mathematics, and has interned at the Allen Institute for AI.



Measurement and Experimentation in Complex Sociopolitical Processes
Aaron Schein | Columbia University

2022-02-22, 15:00 - 16:00
Virtual talk

Abstract:

Complex social and political processes at many scales—from interpersonal networks of friends to international networks of countries—are a central theme of computational social science. Modern methods of data science that can contend with the complexity of data from such processes have the potential to break ground on long-standing questions of critical relevance to public policy. In this talk, I will present two lines of work on 1) estimating the causal effects of friend-to-friend mobilization in US elections, and 2) inferring complex latent structure in dyadic event data of country-to-country interactions. In the first part, I will discuss recent work using large-scale digital field experiments on the mobile app Outvote to estimate the causal effects of friend-to-friend texting on voter turnout in the 2018 and 2020 US elections. This work is among the first to rigorously assess the effectiveness of friend-to-friend "get out the vote" tactics, which political campaigns have increasingly embraced in recent elections. I will discuss the statistical challenges inherent to randomizing interactions between friends with a "light touch" design and will describe the methodology we developed to identify and precisely estimate causal effects. In the second part of this talk, I will discuss hierarchical Bayesian modeling of dyadic event data sets in international relations which contain millions of micro-records of the form "country i took action a to country j at time t". The models I will discuss blend elements of tensor decomposition and dynamical systems to capture complex temporal and network dependence structure in the data. Approximate posterior inference relies on new auxiliary variable augmentation schemes and theorems about the properties and relationships between different discrete distributions. 
At the end of the talk, I will outline the future of both lines of work, as well as their intersection, and sketch a broader vision for how data science can serve computational social science and vice versa.

Please contact the MPI-SWS Office Team for link information

Speaker's bio:

Aaron is a postdoctoral fellow in the Data Science Institute at Columbia University, where he is co-advised by David Blei and Donald Green. His research develops machine learning and data science methods for computational social science. His recent work uses large-scale digital field experiments to assess the causal effects of friend-to-friend mobilization on voter turnout in US elections. Aaron did his PhD at UMass Amherst in Computer Science, where he was advised by Hanna Wallach. His doctoral research developed new hierarchical Bayesian models, tensor decomposition methods, and dynamical systems for analyzing massive data sets of country-to-country events. Aaron has interned at Google and Microsoft Research and worked in policy at the MITRE Corporation. He is also currently a senior technical advisor at Ocurate and a research affiliate at PredictWise. Prior to doing his PhD, he received a BA in Political Science and an MA in Computational Linguistics from UMass Amherst. He is on Twitter @AaronSchein.



Language theory into practice, a play in three acts
Ningning Xie | University of Cambridge

2022-02-21, 15:00 - 16:00
Virtual talk

Abstract:

Computer development has come a long way. Along with the evolution of computers, advances in high-level programming languages allow us to write large-scale software systems easily. While new language features significantly extend a language's expressive power, they often lack theoretical development and lead to subtle implementation bugs. Moreover, while high-level languages abstract over low-level aspects and thus eliminate many sources of errors, the abstraction often comes with a runtime penalty that results in inefficient low-level code. In this talk, I will show how to apply programming language theory to practical programming to offer strong static safety and efficiency guarantees in three domains: language design, runtime systems, and machine learning systems. First, I will demonstrate a type-theoretical formalization of language features, focusing on type inference for dependent types in algebraic datatype declarations. The formalization has guided real-world language implementations. Then, I will show that programming language theory reaps benefits beyond safety. I will present Perceus, a garbage-free reference-counting algorithm with reuse that supports high-level programming while preserving low-level efficiency. Perceus delivers competitive performance compared to state-of-the-art memory reclamation implementations. Finally, as part of a vision to make programming languages broadly applicable, I will discuss my efforts to apply programming language techniques to machine learning systems, by presenting a program synthesis framework that accelerates large-scale distributed machine learning on hardware platforms.

Please contact the MPI-SWS Office Team for link information

Speaker's bio:

Ningning Xie is a research associate at the University of Cambridge. She received her Ph.D. in Computer Science at the University of Hong Kong in 2021. Her research interests are in the field of programming languages, where she applies programming language theory to a variety of domains, including language design, runtime and compiler systems, and machine learning systems. In the last two years of her Ph.D. study, Ningning had research visits at Microsoft Research Redmond and DeepMind London. Her research has been recognized by ACM SIGPLAN Distinguished Paper awards at the Symposium on Principles of Programming Languages (POPL 2020) and the Conference on Programming Language Design and Implementation (PLDI 2021).



Improving Robustness in Machine Learning Models
Yao Qin | Google Research

2022-02-18, 16:00 - 17:00
Virtual talk

Abstract:

There are many robustness issues arising in a variety of forms while deploying ML systems in the real world. For example, neural networks suffer from distributional shift — a model is tested on a data distribution different from what it was trained on. In addition, neural networks are vulnerable to adversarial examples – small perturbations to the input can successfully fool classifiers into making incorrect predictions. In this talk, I will introduce how to improve robustness of machine learning models by building connections between different perspectives of robustness issues and bridging gaps between a wide range of modalities. As a result, seemingly different robustness issues can be tackled by closely-related approaches, and robust ML on multiple modalities and backbone architectures can converge to a common ground.
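The adversarial examples mentioned above can be made concrete with a small generic sketch (not taken from the speaker's work; all names and parameters are illustrative). The classic fast gradient sign method perturbs each input coordinate slightly in the direction that increases the classifier's loss, here applied to a toy logistic model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """Fast gradient sign attack on a logistic model p = sigmoid(w . x).

    Shifts each input coordinate by +/- eps in the direction that
    increases the cross-entropy loss for the true label y.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]  # d(loss)/d(x_i) for cross-entropy
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# A confidently classified point (p ~ 0.95 for label 1) becomes
# noticeably less confident after a small perturbation.
w = [2.0, -1.0]
x = [1.0, -1.0]
x_adv = fgsm(x, 1, w, eps=0.5)
```

Even this toy setting shows the phenomenon the abstract describes: a perturbation bounded by a small `eps` per coordinate is enough to degrade the model's prediction.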

Please contact the MPI-SWS Office Team for link information

Speaker's bio:

Yao Qin is a Research Scientist at Google Research. She received her PhD in Computer Science and Engineering from UC San Diego in 2020 under the supervision of Prof. Garrison Cottrell. Her research focuses on improving the robustness of machine learning. For her contributions to the robustness of ML models (10 first-author publications and over 1,500 citations), she was selected as an EECS Rising Star at MIT in 2021. Yao interned with the Google Brain Toronto team, advised by Geoffrey Hinton, in 2019, and with Google Brain, advised by Ian Goodfellow, in 2018. Homepage: http://cseweb.ucsd.edu/~yaq007/



Quantum Networks: From a Physics Experiment to a Quantum Network System
Stephanie Wehner | TU Delft

2022-02-17, 16:00 - 17:00
Saarbrücken building E1 4, room 325 / simultaneous videocast / Meeting ID: 945 7732 1297

Abstract:

The internet has had a revolutionary impact on our world. The vision of a quantum internet is to provide fundamentally new internet technology by enabling quantum communication between any two points on Earth. Such a quantum internet can —in synergy with the "classical" internet that we have today—connect quantum information processors in order to achieve unparalleled capabilities that are provably impossible by using only classical information. At present, such technology is under development in physics labs around the globe, but no large-scale quantum network systems exist. We start by providing a gentle introduction to quantum networks for computer scientists, and briefly review the state of the art. We highlight some of the many open questions for computer science in the domain of quantum networking, illustrated with a very recent result realizing the first quantum link-layer protocol on a programmable 3-node quantum network based on nitrogen-vacancy centers in diamond. We close by providing a series of pointers to learn more, as well as downloadable tools that allow you to play with simulated quantum networks without leaving your home.

Information about the series and recordings of previous talks can be found at https://www.mpi-inf.mpg.de/departments/algorithms-complexity/quantum-lecture-series

Speaker's bio:

-



Characterizing and Mitigating Threats to Trust and Safety Online
Yiqing Hua | Cornell University

2022-02-15, 10:00 - 11:00
Virtual talk

Abstract:

Supporting a safe and trustworthy online environment is challenging, as these environments are constantly threatened by abusive behaviors that cause real human harm. Among the numerous threats, online harassment suppresses voices, and misleading information and propaganda undermine public trust. Existing methods to combat these threats are often not sufficient, as adversaries may abuse and exploit technologies in nuanced ways, and mitigation strategies don't always reflect users' needs. In this talk, I will present my work on characterizing threats, and empowering users with new techniques to combat these threats. First, I will discuss the challenges faced in detecting subtle harassment on social media, and my approach to context-specific analysis using the United States 2018 general election as a case study. Second, I will demonstrate the importance of characterizing user participation in adversarial activities, to inform better moderation mechanism design. Lastly, I will introduce my work on developing privacy-preserving abuse mitigation techniques, to allow user-side warnings of misinformation images in end-to-end encrypted environments.

Please contact the MPI-SWS Office Team for link information

Speaker's bio:

Yiqing Hua is a PhD candidate in Computer Science at Cornell Tech, Cornell University. Her research lies in the intersection of social computing and security and privacy. Her work focuses on characterizing threats to online trust and safety, and enabling abuse mitigation in privacy-sensitive environments. She was a recipient of the Digital Life Initiative Fellowship, and was named as EECS Rising Star in 2020. Her work received an Honorable Mention award in CHI 2020.



How do neurons learn?
Hadi Daneshmand | Princeton University

2022-02-11, 12:00 - 13:00
Virtual talk

Abstract:

Representation learning with neural networks automates feature extraction, with less need for manual feature engineering, thereby achieving remarkable performance in image, text, and strategy processing. However, the underlying mechanism of representation learning is not well understood. This limits applications of representation learning in critical tasks, such as cancer diagnosis and other medical decisions. In this talk, we propose a research plan for studying representation learning with three core research focuses. (1) Random neural networks: by studying random neural networks, we shed light on the inner workings behind the remarkable performance of modern neural networks, and we demonstrate how this study allows us to go beyond the conventional trial-and-error development of neural networks. (2) Local optimality: given a neural network, is it possible to improve its performance only by slight modifications of the network parameters? This is the focus of local optimization for representation learning; our research highlights that local optimization requires more study in modern representation learning with generative adversarial networks. (3) Modeling: a mathematical study of learning dynamics is very challenging. Modeling facilitates the study of learning dynamics by omitting technical details of learning; for example, a continuous-time dynamical system may model an iterative learning method, bridging the gap between dynamical systems and representation learning.

Please contact the MPI-SWS Office Team for link information

Speaker's bio:

Hadi is a postdoc at Princeton University. He previously worked at INRIA Paris as a postdoctoral researcher under the supervision of Professor Francis Bach. Hadi completed his Ph.D. in computer science in June 2020 in the Machine Learning Department of ETH Zurich under the supervision of Professor Thomas Hofmann. The focus of his research is optimization for (deep) neural networks.



Toward Reliable Machine Learning with Kernels
Krikamol Muandet | MPI-IS

2022-02-11, 09:30 - 10:30
Virtual talk

Abstract:

Society is made up of diverse individuals, demographic groups, and institutions. Learning and deploying algorithmic models across such heterogeneous environments therefore involves various trade-offs. In order to develop reliable machine learning algorithms that can interact successfully with the real world, it is necessary to deal with changes in the underlying data-generating distributions. This talk will be about the kernel mean embedding (KME), a nonparametric kernel-based framework for representing probability distributions and modeling changes thereof. In particular, I will focus on how this framework can help improve the credibility of algorithmic decision-making by enabling us to reason about higher-order causal effects of policy interventions, as well as by removing the effect of unobserved confounders through the use of an instrumental variable (IV). Lastly, I will argue that a better understanding of the ways in which our data are generated, and how our models can influence them, will be crucial for reliable machine learning systems, especially when gaining full information about the data may not be possible.
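As a rough, textbook-style illustration of the kernel mean embedding idea (a generic sketch, not code from the speaker; function and parameter names are illustrative): a distribution is embedded as the mean of a kernel feature map, and the RKHS distance between two embeddings — the maximum mean discrepancy (MMD) — quantifies how far apart two data-generating distributions are:

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel k(x, y) = exp(-gamma * (x - y)^2)
    return math.exp(-gamma * (x - y) ** 2)

def mmd_sq(xs, ys, gamma=1.0):
    """Squared MMD: the RKHS distance between the empirical kernel
    mean embeddings of two samples xs and ys."""
    xx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    yy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    xy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return xx + yy - 2 * xy

# Identical samples embed to the same point; shifted samples do not.
same = mmd_sq([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])     # zero
shifted = mmd_sq([0.0, 0.1, 0.2], [5.0, 5.1, 5.2])  # clearly positive
```

A growing MMD between training data and deployment data is one simple signal of the distributional change that reliable systems must handle.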

Please contact the MPI-SWS Office Team for link information

Speaker's bio:

Krikamol Muandet is currently a research group leader in the Empirical Inference Department at the Max Planck Institute for Intelligent Systems (MPI-IS), Tübingen, Germany. Previously, he was a lecturer in the Department of Mathematics at Mahidol University, Bangkok, Thailand. He received his Ph.D. in computer science from the University of Tübingen in 2015, working mainly with Prof. Bernhard Schölkopf. He received his master's degree in machine learning from University College London (UCL), United Kingdom, where he worked mostly with Prof. Yee Whye Teh at the Gatsby Computational Neuroscience Unit. He served as a publication chair of AISTATS 2021 and as an area chair for AISTATS 2022, NeurIPS 2021, NeurIPS 2020, NeurIPS 2019, and ICML 2019, among others.



Strengthening and Enriching Machine Learning for Cybersecurity
Wenbo Guo | Penn State University

2022-02-10, 14:00 - 15:00
Virtual talk

Abstract:

Nowadays, security researchers are increasingly using AI to automate and facilitate security analysis. Although it has made some meaningful progress, AI has not yet reached its full potential in security, mainly due to two challenges. First, existing ML techniques have not met security professionals' requirements for critical properties such as interpretability and adversary resistance. Second, security data poses many new technical challenges that break the assumptions of existing ML models and thus jeopardize their efficacy. In this talk, I will describe my research efforts to address these challenges, with a primary focus on strengthening the interpretability of ML-based security systems and enriching ML to detect concept drift in security data. Regarding interpretability, I will describe our explanation methods for deep learning-based and deep reinforcement learning-based security systems and demonstrate how security analysts can use these methods to establish trust in black-box models and patch model vulnerabilities. As for concept drift, I will introduce a novel ML system that detects and explains drifting samples, and demonstrate its application to a real-world malware database. Finally, I will conclude by highlighting my future plans for maximizing the capability of advanced ML in cybersecurity.

Please contact the MPI-SWS Office Team for link information

Speaker's bio:

Wenbo Guo is a Ph.D. Candidate at Penn State, advised by Professor Xinyu Xing. His research interests are machine learning and cybersecurity. His work includes strengthening the fundamental properties of machine learning models and designing customized machine learning models to handle security-unique challenges. He is a recipient of the IBM Ph.D. Fellowship (2020-2022), Facebook/Baidu Ph.D. Fellowship Finalist (2020), and ACM CCS Outstanding Paper Award (2018). His research has been featured by multiple mainstream media and has appeared in a diverse set of top-tier venues in security, machine learning, and data mining. Going beyond academic research, he also actively participates in many world-class cybersecurity competitions and has won the 2018 DEFCON/GeekPwn AI challenge finalist award.



Software for Fast Storage Hardware
Willy Zwaenepoel | University of Sydney

2022-01-20, 10:00 - 11:00
Virtual talk

Abstract:

Storage technologies are entering the market with performance vastly superior to conventional storage devices. This technology shift requires a complete rethinking of the software storage stack.

In this talk I will give two examples of our work with Optane-based solid-state (block) devices that illustrate the need for and the benefit of a wholesale redesign.

First, I will describe the Kvell key-value (KV) store. The key observation underlying Kvell is that conventional KV software on fast devices is bottlenecked by the CPU rather than by the device. Kvell therefore focuses on minimizing CPU intervention.

Second, I will describe the Kvell+ OLTP/OLAP system built on top of Kvell. The key observation here is that these storage devices have become so fast that the conventional implementation of snapshot isolation – maintaining multiple versions – quickly fills up the device. Kvell+ therefore processes new versions as they are created.

This talk describes joint work with Oana Balmau (McGill University), Karan Gupta (Nutanix) and Baptiste Lepers (University of Sydney).

---

Please contact the MPI-SWS Office Team for the Zoom link information.

Speaker's bio:

Willy Zwaenepoel received his BS/MS from Ghent University in 1979, and his MS and PhD from Stanford University, in 1980 and 1984, respectively. He is currently dean of engineering at the University of Sydney. Previously, he has been on the faculty at Rice University and head of the school of computer and communication sciences at EPFL.  He has been involved with a number of startups including Nutanix (Nasdaq:NTNX). He was elected IEEE Fellow in 1998 and ACM Fellow in 2000 and received a number of awards for teaching and research, including the Eurosys Lifetime Achievement Award. His main interests are in operating systems and distributed systems.



UDAO: A Next-Generation Cloud Data Analytics Optimizer via Large-Scale Machine Learning
Yanlei Diao | Ecole Polytechnique Paris and UMass Amherst

2021-12-16, 12:00 - 13:00
Virtual talk

Abstract:

Data analytics in the cloud has become an integral part of enterprise businesses. Big data analytics systems, however, still lack the ability to take task objectives such as user performance goals and budgetary constraints and automatically configure an analytical job to achieve these objectives. This talk presents UDAO, a Unified Data Analytics Optimizer, that can automatically determine a cluster configuration with a suitable number of cores as well as other system parameters that best meet the task objectives. At the core of our work is a principled multi-objective optimization (MOO) approach that computes a Pareto-optimal set of configurations to reveal tradeoffs between different objectives, recommends a new cluster configuration that best explores such tradeoffs, and employs novel optimizations to enable such recommendations within a few seconds. Such optimization is further enabled by a deep learning-based modeling approach that can learn a model for each user objective as complex as necessary for the underlying computing environment. Detailed experiments using a Spark-based prototype and benchmark workloads show that our MOO techniques provide a 2-50x speedup over existing MOO methods, while offering good coverage of the Pareto frontier. Compared to OtterTune, a state-of-the-art performance tuning system, UDAO recommends Spark configurations that yield a 26%-49% reduction in the running time of the TPCx-BB benchmark while adapting to different user preferences on multiple objectives. This talk ends by outlining remaining research challenges in automated resource management and performance optimization for cloud data analytics.
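The Pareto-optimal set mentioned above has a simple definition that can be sketched in a few lines (a generic illustration, not UDAO's actual algorithm; the example configurations and objective names are invented): a configuration is Pareto-optimal if no other configuration is at least as good on every objective and strictly better on at least one.

```python
def pareto_front(points):
    """Return the Pareto-optimal subset of configurations. Each point is
    a tuple of objective values, and every objective is minimized
    (e.g. (running_time_seconds, dollar_cost))."""
    def dominates(p, q):
        # p dominates q: no worse everywhere, strictly better somewhere
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Five candidate cluster configurations as (running time, cost) pairs.
configs = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
front = pareto_front(configs)  # (2, 6) and (4, 4) are dominated by (2, 4)
```

The frontier exposes the tradeoff curve; a system like UDAO can then pick the point on it that best matches the user's stated preferences.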

Speaker's bio:

Yanlei Diao is Professor of Computer Science at the University of Massachusetts Amherst, USA and Ecole Polytechnique, France. Her research interests lie in big data analytics and scalable intelligent information systems, with a focus on optimization in cloud analytics, data stream analytics, explanation discovery, interactive data exploration, and uncertain data management. She received her PhD in Computer Science from the University of California, Berkeley in 2005.

Prof. Diao is a recipient of the 2016 ERC Consolidator Award, 2013 CRA-W Borg Early Career Award (one female computer scientist selected each year for outstanding contributions), IBM Scalable Innovation Faculty Award, and NSF Career Award. She has given keynote speeches at the ACM DEBS Conference, the ExploreDB workshop, and the Distinguished Lecture Series at the IBM Almaden Research Center, the University of Texas at Austin and Technische Universitaet Darmstadt. She has served as Editor-in-Chief of the ACM SIGMOD Record, Associate Editor of ACM TODS, Chair of the ACM SIGMOD Research Highlight Award Committee, and member of the SIGMOD and PVLDB Executive Committees. She was PC Co-Chair of IEEE ICDE 2017 and ACM SoCC 2016, and served on the organizing committees of SIGMOD, PVLDB, and CIDR, as well as on the program committees of many international conferences and workshops. http://www.lix.polytechnique.fr/~yanlei.diao/



Optimal Machine Teaching Without Collusion
Sandra Zilles | University of Regina

2021-11-23, 14:00 - 15:00
Virtual talk

Abstract:

In supervised machine learning, in an abstract sense, a concept in a given reference class has to be inferred from a small set of labeled examples. Machine teaching refers to the inverse problem, namely the problem of compressing any concept in the reference class to a "teaching set" of labeled examples in a way that allows the concept to be reconstructed. The goal is to minimize the worst-case teaching set size taken over all concepts in the reference class, while at the same time adhering to certain conditions that disallow unfair collusion between the teacher and the learner. Applications of machine teaching include multi-agent systems and program synthesis. In this presentation, it is first shown how preference relations over concepts can be used in order to guarantee collusion-free teaching and learning. Intuitive examples are presented in which quite natural preference relations result in data-efficient collusion-free teaching of complex classes of concepts. Further, it is demonstrated that optimal collusion-free teaching cannot always be attained by the preference-based approach. Finally, we will challenge the standard notion of collusion-freeness and show that a more stringent notion characterizes teaching with the preference-based approach. This presentation summarizes joint work with Shaun Fallat, Ziyuan Gao, David G. Kirkpatrick, Christoph Ries, Hans U. Simon, and Abolghasem Soltani.
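The notion of a teaching set can be illustrated with a tiny brute-force sketch (a generic illustration under the standard definition, not the preference-based machinery of the talk; the threshold concept class below is an invented toy example): a teaching set for a concept is a smallest set of labeled examples with which no other concept in the class is consistent.

```python
from itertools import combinations

def consistent(concept, sample):
    # A concept (given as its set of positive examples) agrees with
    # every labeled example in the sample.
    return all((x in concept) == label for x, label in sample)

def teaching_set(target, concepts, domain):
    """Smallest labeled sample for which `target` is the only consistent
    concept in the class (exhaustive search; tiny classes only)."""
    labeled = [(x, x in target) for x in domain]
    for k in range(len(domain) + 1):
        for sample in combinations(labeled, k):
            if [c for c in concepts if consistent(c, sample)] == [target]:
                return set(sample)

# Threshold concepts over {0, 1, 2, 3}: c_t = {x : x >= t}.
domain = range(4)
concepts = [frozenset(range(t, 4)) for t in range(5)]
ts = teaching_set(frozenset({2, 3}), concepts, domain)
# Two examples bracketing the threshold suffice: {(1, False), (2, True)}.
```

For thresholds, two boundary examples always pin down the concept, so the worst-case teaching set size of this class is 2 regardless of the domain size.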

Speaker's bio:

Dr. Sandra Zilles is a Professor of Computer Science at the University of Regina, where she holds a Canada Research Chair in Computational Learning Theory as well as a Canada CIFAR AI Chair. Her research on machine learning and artificial intelligence is funded by government agencies and industry partners and has led to over 100 peer-reviewed publications. Her main research focus is on theoretical aspects of machine learning, yet some of the methods developed in her lab have found applications in research on autonomous vehicles, in research on genetics, and in cancer research. Dr. Zilles is a member of the College of New Scholars, Artists and Scientists of the Royal Society of Canada, an Associate Editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence, and an Associate Editor for the Journal of Computer and System Sciences. She recently served on the Board of Directors for Innovation Saskatchewan and on the Board of Directors for the Pacific Institute for the Mathematical Sciences (PIMS).



Event-Driven Delay-Induced Tasks: Model, Analysis, and Applications
Federico Aromolo | Scuola Superiore Sant'Anna - Pisa

2021-11-19, 10:00 - 11:00
Virtual talk

Abstract:

Support for hardware acceleration and parallel software workloads on heterogeneous multiprocessor platforms is becoming increasingly relevant in the design of high-performance and power-efficient real-time embedded systems. Communication between jobs dispatched on different cores and specialized hardware accelerators such as FPGAs and GPUs is most often implemented using asynchronous events. The delays incurred by each task due to the time spent waiting for such events should appropriately be accounted for in the timing analysis of the resulting scheduling behavior. This talk presents the event-driven delay-induced (EDD) task model, which is suitable to represent and analyze the timing behavior of complex computing workloads that incur event-related delays in the communication and synchronization between different processing elements. The EDD task model generalizes several existing task models, providing enhanced expressiveness towards the timing analysis of parallel processing workloads that involve both synchronous and asynchronous hardware acceleration requests. Two analysis techniques for EDD tasks executing on single-core platforms under fixed-priority scheduling are presented; then, a model transformation technique is provided to analyze parallel real-time tasks executing under partitioned multiprocessor scheduling by means of a set of EDD tasks. In the experiments, partitioned scheduling of parallel tasks is shown to outperform federated scheduling when the proposed analysis approach is combined with specialized partitioning heuristics.
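The EDD analysis itself is the subject of the talk; as background, the classical response-time analysis for fixed-priority scheduling that such analyses build on can be sketched as follows (a standard textbook fixed-point iteration, not the speaker's method; the task parameters below are invented):

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under preemptive fixed-priority
    scheduling. `tasks` is sorted highest priority first; each task is a
    (C, T) pair of worst-case execution time and period. Classic
    fixed-point iteration: R = C_i + sum_j ceil(R / T_j) * C_j over all
    higher-priority tasks j."""
    C_i = tasks[i][0]
    R = C_i
    while True:
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R
        R = R_next

# Three tasks (C, T), highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]
r_lowest = response_time(tasks, 2)  # the lowest-priority task's response time
```

Event-induced delays, as in the EDD model, add further terms to this interference bound, which is what makes the analysis in the talk considerably more involved.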

Please contact the office team for link information.

Speaker's bio:

Federico Aromolo is a Ph.D. student in Emerging Digital Technologies (Embedded Systems curriculum) at the Scuola Superiore Sant'Anna (Pisa, Italy), where he works at the Real-Time Systems Laboratory (ReTiS Lab) under the supervision of Prof. Giorgio Buttazzo and Prof. Alessandro Biondi. He holds a Master of Science degree in Embedded Computing Systems from the Scuola Superiore Sant'Anna and the University of Pisa, and a Bachelor of Science degree in Computer Engineering from the University of Pisa, both achieved with highest honors. His research interests are in the area of real-time embedded systems and include real-time scheduling and synchronization algorithms, design and implementation of embedded and cyber-physical systems, real-time operating systems, and advanced robotics and artificial intelligence applications.



Database Systems 2.0
Johannes Gehrke | Microsoft Research

2021-11-15, 16:00 - 17:00
Virtual talk

Abstract:

Software 2.0 – the augmentation and replacement of traditional code with models, especially deep neural networks – is changing how we develop, deploy, and maintain software. In this talk, I will describe the challenges and opportunities that this change brings with it, focusing on its impact on database systems.

Speaker's bio:

Johannes Gehrke is a Technical Fellow at Microsoft and the Managing Director of Microsoft Research at Redmond and the CTO and head of machine learning for Microsoft Teams. He has received a National Science Foundation Career Award, an Arthur P. Sloan Fellowship, a Humboldt Research Award, the 2011 IEEE Computer Society Technical Achievement Award, the 2021 ACM SIGKDD Innovation Award, and he is an ACM Fellow and an IEEE Fellow. From 1999 to 2015, Johannes was on the faculty in the Department of Computer Science at Cornell University where he graduated 25 PhD students, and from 2005 to 2008, he was Chief Scientist at FAST Search and Transfer.



Enzian: a cache-coherent heterogeneous research computer
Timothy Roscoe | ETH Zurich

2021-11-03, 10:00 - 11:00
Virtual talk

Abstract:

Enzian is a research computer built at ETH Zurich which combines a server-class CPU with a large FPGA in an asymmetric 2-socket NUMA system. It is designed to be used individually or in a cluster to explore the design space for future hardware and its implications for system software. Enzian is deliberately over-engineered, and (as I'll show) can replicate the use-cases of almost all other FPGA platforms used in academic research today. Perhaps unique to Enzian is exposing the CPU's native cache-coherence protocol to applications on the FPGA, and I'll discuss the additional opportunities this offers for research as well as the challenges we faced in interoperating with an existing coherence protocol not designed for this use-case. There are nine Enzian systems operational so far, being used locally at ETH and remotely by collaborators.

---

Please contact the MPI-SWS Office Team for the Zoom link information.

Speaker's bio:

Timothy Roscoe is a Full Professor in the Systems Group of the Computer Science Department at ETH Zurich, where he works on operating systems, networks, and distributed systems, and is currently head of department.

Mothy received a PhD in 1995 from the Computer Laboratory of the University of Cambridge, where he was a principal designer and builder of the Nemesis OS. After three years working on web-based collaboration systems at a startup in North Carolina, he joined Sprint's Advanced Technology Lab in Burlingame, California in 1998, working on cloud computing and network monitoring. He joined Intel Research at Berkeley in April 2002 as a principal architect of PlanetLab, an open, shared platform for developing and deploying planetary-scale services. Mothy joined the Computer Science Department ETH Zurich in January 2007, and was named Fellow of the ACM in 2013 for contributions to operating systems and networking research.

His work has included the Barrelfish multikernel research OS, as well as work on distributed stream processors, and using formal specifications to describe the hardware/software interfaces of modern computer systems. Mothy's current research centers on Enzian, a powerful hybrid CPU/FPGA machine designed for research into systems software.



Concurrent NetKAT: Modeling and analyzing stateful, concurrent networks
Alexandra Silva | Cornell University

2021-10-28, 15:00 - 16:00
Virtual talk

Abstract:

We introduce Concurrent NetKAT (CNetKAT), an extension of the network programming language NetKAT with multiple packets and with operators to specify and reason about concurrency and state in a network. We provide a model of the language based on partially ordered multisets, well-established mathematical structures in the denotational semantics of concurrent languages. We prove that CNetKAT is a sound and complete axiomatization of this model, and we illustrate the use of CNetKAT through various examples. More generally, CNetKAT is an algebraic framework for reasoning about programs with both local and global state. In our model these are, respectively, the packets and the global variable store, but the scope of applications is much more general, including reasoning about hardware pipelines inside an SDN switch. This is joint work with Jana Wagemaker, Nate Foster, Tobias Kappé, Dexter Kozen, and Jurriaan Rot.

Please contact the office team for link information.

Speaker's bio:

Alexandra Silva is a Professor in the Computer Science Department at Cornell University. Her main research focuses on the modular development of specification languages and algorithms for models of computation. Much of this work is developed from the unifying perspective offered by coalgebra, a mathematical framework established over the last few decades.



Algebra-based Analysis of Polynomial Probabilistic Programs
Laura Kovacs | TU Wien

2021-09-22, 10:00 - 11:00
Virtual talk

Abstract:

We present fully automated approaches to safety and termination analysis of probabilistic while-programs whose guards and expressions are polynomial expressions over random variables and parametrised distributions. We combine methods from symbolic summation and statistics to derive invariants as valid properties over higher-order moments, such as expected values or variances, of program variables, thereby synthesizing quantitative invariants of probabilistic program loops. We further extend our moments-based analysis to prove termination of the considered probabilistic while-programs. This is joint work with Ezio Bartocci, Joost-Pieter Katoen, Marcel Moosbrugger and Miroslav Stankovic.

--

Please contact the MPI-SWS Office Team for the ZOOM link information.

Speaker's bio:

Laura Kovacs is a full professor in computer science at TU Wien, leading the automated program reasoning (APRe) group of the Formal Methods in Systems Engineering Division. Her research focuses on the design and development of new theories, technologies, and tools for program analysis, with a particular focus on automated assertion generation, symbolic summation, computer algebra, and automated theorem proving. She is the co-developer of the Vampire theorem prover and a Wallenberg Academy Fellow of Sweden. Her research has also been recognized with an ERC Starting Grant (2014), an ERC Proof of Concept Grant (2018) and an ERC Consolidator Grant (2020).



Validating models for microarchitectural security
Frank Piessens | Katholieke Universiteit Leuven

2021-09-15, 10:30 - 12:00
Virtual talk

Abstract:

Microarchitectural security is one of the most challenging and exciting problems in system security today. With the discovery of transient execution attacks, it has become clear that microarchitectural attacks have significant impact on the security properties of software running on a processor that runs code from various stakeholders (for instance, in the cloud). This talk will first provide an overview of the current understanding of microarchitectural security, with a focus on how the research community has built formal models for processors that support proving that software is resilient to specific classes of microarchitectural attacks. Next, we turn to the problem of validating these proposed formal models: how can we convince ourselves and others that a given formal model is an adequate model for a given real-world processor, and that we can justifiably trust the security properties proven based on the model? This is an instance of the more general problem of empirically validating whether a real-world system satisfies the assumptions on which a formal model relies. We will discuss a small case study in which we empirically validated a formally proven security property of a simple processor by systematically attacking the corresponding real-world implementation of the processor. We end with some conclusions and reflections on how our experiences from this case study might help us build more adequate formal models.

--

Please contact the MPI-SWS Office Team for the ZOOM link information.

Speaker's bio:

Frank Piessens is a full professor in the Department of Computer Science at the Katholieke Universiteit Leuven, Belgium. His research field is software and system security, where he focuses on the development of high-assurance techniques to deal with implementation-level software vulnerabilities and bugs, including techniques such as software verification, run-time monitoring, hardware security architectures, type systems and programming language design. He has served on the program committee of numerous security and software conferences including ACM CCS, Usenix Security, IEEE Security & Privacy, and ACM POPL. He acted as program chair for the International symposium on Engineering Secure Software and Systems (ESSOS 2014 & 2015), for the International Conference on Principles of Security and Trust (POST 2016) and for the IEEE European Symposium on Security & Privacy (Euro S&P 2018 & 2019).



Fast, optimal, and guaranteed safe controller synthesis
Chuchu Fan | Massachusetts Institute of Technology

2021-08-26, 15:00 - 16:00
Virtual talk

Abstract:

Rigorous approaches based on controller synthesis can generate correct-by-construction controllers that guarantee that the system under control meets some higher-level tasks. By reducing design and testing cycles, synthesis can help create safe autonomous systems that involve complex interactions of dynamics and decision logic. In general, however, synthesis problems are known to have high computational complexity for high-dimensional and nonlinear systems. In this talk, I will present a series of new synthesis algorithms that suggest that these challenges can be overcome and that rigorous approaches are indeed promising. I will talk about how to synthesize controllers for linear systems, nonlinear systems, hybrid systems with both discrete and continuous control variables, and multi-agent systems, with guarantees on the safety and optimality of the solutions.

Please contact the office team for link information.

Speaker's bio:

Chuchu Fan is an Assistant Professor in the Department of Aeronautics and Astronautics at MIT. Before that, she was a postdoctoral researcher at Caltech and received her Ph.D. from the Electrical and Computer Engineering Department at the University of Illinois at Urbana-Champaign in 2019. She earned her bachelor’s degree from Tsinghua University, Department of Automation, in 2013. Her group at MIT works on using rigorous mathematics, including formal methods, machine learning, and control theory, for the design, analysis, and verification of safe autonomous systems. Chuchu’s dissertation, "Formal methods for safe autonomy", won the ACM Doctoral Dissertation Award in 2020.



Efficient quantum algorithm for dissipative nonlinear differential equations
Andrew Childs | University of Maryland

2021-07-22, 16:00 - 17:00
Virtual talk

Abstract:

Max Planck Distinguished Speaker Talk in Quantum Computing and Quantum Information

Speaker's homepage: https://www.cs.umd.edu/~amchilds/

Location: https://zoom.us/j/94577321297?pwd=N3l5K1ZtZ3E1aytnWlBkL1FUazNXZz09 (Meeting ID: 945 7732 1297, Passcode: 205903)

Hosts: Ignacio Cirac (MPQ) and Kurt Mehlhorn (MPI-INF).

While there has been extensive previous work on efficient quantum algorithms for linear differential equations, analogous progress for nonlinear differential equations has been severely limited due to the linearity of quantum mechanics. Despite this obstacle, we develop a quantum algorithm for initial value problems described by dissipative quadratic n-dimensional ordinary differential equations. Assuming R<1, where R is a parameter characterizing the ratio of the nonlinearity to the linear dissipation, this algorithm has complexity T^2 poly(log T, log n, log(1/ϵ))/ϵ, where T is the evolution time and ϵ is the allowed error in the output quantum state. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in T. We achieve this improvement using the method of Carleman linearization, for which we give a novel convergence theorem. This method maps a system of nonlinear differential equations to an infinite-dimensional system of linear differential equations, which we discretize, truncate, and solve using the forward Euler method and the quantum linear system algorithm. We also provide a lower bound on the worst-case complexity of quantum algorithms for general quadratic differential equations, showing that the problem is intractable for R≥sqrt(2). Finally, we discuss potential applications of this approach to problems arising in biology as well as in fluid and plasma dynamics.

Based on joint work with Jin-Peng Liu, Herman Kolden, Hari Krovi, Nuno Loureiro, and Konstantina Trivisa.
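As a purely classical, editorial illustration of the Carleman linearization step described in the abstract (this is not code from the talk), the sketch below linearizes the scalar dissipative quadratic ODE x' = ax + bx^2 by tracking the truncated sequence of powers y_k = x^k and integrating the resulting linear system with forward Euler; all parameter values are invented for the demo.

```python
import math

def carleman_euler(a, b, x0, t_final, truncation=4, dt=1e-3):
    # Carleman linearization of x' = a*x + b*x^2: the powers x^k satisfy
    #   (x^k)' = k*a*x^k + k*b*x^(k+1),
    # an infinite linear system; truncating at `truncation` drops the one
    # term that refers to x^(truncation+1). Here y[k] approximates x^(k+1).
    y = [x0 ** (k + 1) for k in range(truncation)]
    for _ in range(int(round(t_final / dt))):
        dy = []
        for k in range(truncation):
            kk = k + 1
            higher = y[k + 1] if k + 1 < truncation else 0.0  # truncated term
            dy.append(kk * a * y[k] + kk * b * higher)
        y = [yk + dt * dyk for yk, dyk in zip(y, dy)]  # forward Euler step
    return y[0]  # approximation of x(t_final)

def exact(a, b, x0, t):
    # Closed-form reference via the substitution u = 1/x (Bernoulli equation):
    # u' = -a*u - b, so u(t) = (1/x0 + b/a) * exp(-a*t) - b/a.
    u = (1.0 / x0 + b / a) * math.exp(-a * t) - b / a
    return 1.0 / u

a, b, x0 = -1.0, 0.2, 0.5   # made-up values; |b*x0/a| = 0.1 < 1, so the
approx = carleman_euler(a, b, x0, t_final=1.0)  # dissipation dominates
print(abs(approx - exact(a, b, x0, 1.0)))  # small truncation + Euler error
```

Because the dissipation dominates the nonlinearity, a low truncation order already agrees closely with the closed-form solution, which is the regime (R < 1) in which the quantum algorithm's convergence theorem applies.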

Speaker's bio:

-



Making Distributed Deep Learning Adaptive
Peter Pietzuch | Imperial College London

2021-07-14, 10:00 - 10:00
Virtual talk

Abstract:

When using distributed machine learning (ML) systems to train models on a cluster of worker machines, users must configure a large number of parameters: hyper-parameters (e.g. the batch size and the learning rate) affect model convergence; system parameters (e.g. the number of workers and their communication topology) impact training performance. Some of these parameters, such as the number of workers, may also change in elastic machine learning scenarios. In current systems, adapting such parameters during training is ill-supported. In this talk, I will describe our recent work on KungFu, a distributed deep learning library for TensorFlow and PyTorch that is designed to enable adaptive and elastic training. KungFu allows users to express high-level Adaptation Policies (APs) that describe how to change hyper- and system parameters during training. APs take real-time monitored metrics (e.g. signal-to-noise ratios) as input and trigger control actions (e.g. cluster rescaling or synchronisation strategy updates). For execution, APs are translated into monitoring and control operators that are embedded in the dataflow graph. APs exploit an efficient asynchronous collective communication layer, which ensures concurrency and consistency of monitoring and adaptation operations.

--

Please contact the MPI-SWS Office Team for the ZOOM link information.

Speaker's bio:

Peter Pietzuch is a Professor of Distributed Systems at Imperial College London, where he leads the Large-scale Data & Systems (LSDS) group (http://lsds.doc.ic.ac.uk). His research work focuses on the design and engineering of scalable, reliable and secure large-scale software systems, with a particular interest in performance, data management and security issues. He has published papers in premier scientific venues, including OSDI/SOSP, SIGMOD, VLDB, ASPLOS, USENIX ATC, EuroSys, SoCC, ICDCS and Middleware. Currently he is a Visiting Researcher with Microsoft Research and serves as the Director of Research in the Department, the Chair of the ACM SIGOPS European Chapter, and an Associate Editor for IEEE TKDE and TCC. Before joining Imperial College London, he was a post-doctoral Fellow at Harvard University. He holds PhD and MA degrees from the University of Cambridge.



Attacks on Hardware: Why You Should Not Do It
Herbert Bos | Vrije Universiteit Amsterdam

2021-06-30, 16:00 - 17:30
Virtual talk

Abstract:

Within a span of just a few years, we have gone from completely trusting our hardware to realising that everything is broken and all our security guarantees are built on sand. Memory chips have fundamental (Rowhammer) flaws that allow attackers to modify data without accessing it and CPUs are full of side channels and transient execution problems that lead to information leakage across pretty much all security boundaries. Combined, these issues have led to a string of high-profile attacks. In this talk, I will discuss some of the developments in such attacks, mostly by means of the attacks in which our group was involved. Although the research was exciting, I will argue that the way we conduct security research on hardware is broken. The problem is that the interests of hardware manufacturers and academics do not align and this is bad for everyone.

--

Please contact the MPI-SWS Office Team for Zoom link information.

Speaker's bio:

Herbert Bos is professor of Systems and Network Security at Vrije Universiteit Amsterdam where he co-leads the VUSec research group. He obtained his Ph.D. from Cambridge University Computer Laboratory (UK). Coming from a systems background, he drifted into security a few years ago and never left. His research interests cover all aspects of system-level security and reliability, including topics such as software hardening, exploitation, micro-architectural attacks, binary analysis, fuzzing, side channels, and reverse engineering. He is very proud of his (former) students who are much cleverer than he is.



Domain-Agnostic Accelerators: Efficiency with Programmability
Tulika Mitra | National University of Singapore

2021-06-16, 10:30 - 11:30
Virtual talk

Abstract:

Domain-specific hardware accelerators for graphics, deep learning, image processing, and other tasks have become pervasive to meet the performance and energy-efficiency needs of emerging applications. However, such specializations are inherently at odds with the programmability long enjoyed by software developers from general-purpose processors. In this talk, I will explore the feasibility of programmable, domain-agnostic accelerators that can be morphed and instantiated to specialized accelerators at runtime through software. In particular, I will present Coarse-Grained Reconfigurable Array (CGRA) as a promising approach to offer high accelerator efficiency while supporting diverse tasks through compile-time configurability. The central challenge is efficient spatio-temporal mapping of the applications expressed in high-level programming languages with complex data dependencies, control flow, and memory accesses to the accelerator. We approach this challenge through a synergistic hardware-software co-designed approach with (a) innovations at the architecture level to improve the efficiency of the application execution as well as ease the burden on the compilation, and (b) innovations at the compiler level to fully exploit the architectural optimizations for quality and fast mapping. Finally, I will share systems-level considerations for real-world deployment of domain-agnostic accelerators in the context of edge computing.

Please contact the office team for link information.

Speaker's bio:

Tulika Mitra is Provost’s Chair Professor of Computer Science and Vice Provost (Academic Affairs) at the National University of Singapore (NUS). Her research is focused on the design automation of energy-efficient, real-time embedded computing systems. She has authored around two hundred scientific publications in premier international journals and conferences, and her research has been recognized with best-paper awards and nominations at major conferences. Tulika serves as the Editor-in-Chief of ACM Transactions on Embedded Computing Systems, Program Chair of the International Conference on Computer-Aided Design (ICCAD) 2021, and General Chair of Embedded Systems Week (ESWEEK) 2020.



Monoculture and Simplicity in an Ecosystem of Algorithmic Decision-Making
Jon Kleinberg | Cornell University

2021-06-02, 15:00 - 16:00
Virtual talk

Abstract:

Algorithms are increasingly used to aid decision-making in high-stakes settings including employment, lending, healthcare, and the legal system. This development has led to an ecosystem of growing complexity in which algorithms and people interact around consequential decisions, often mediated by organizations and firms that may be in competition with one another.

We consider two related sets of issues that arise in this setting. First, concerns have been raised about the effects of algorithmic monoculture, in which multiple decision-makers all rely on the same algorithm. In a set of models drawing on minimal assumptions, we show that when competing decision-makers converge on the use of the same algorithm as part of a decision pipeline, the result can potentially be harmful for social welfare even when the algorithm is more accurate than any decision-maker acting on their own. Second, we consider some of the canonical ways in which data is simplified over the course of these decision-making pipelines, showing how this process of simplification can introduce sources of bias in ways that connect to principles from the psychology of stereotype formation.

The talk will be based on joint work with Sendhil Mullainathan and Manish Raghavan.

Please contact the office team for link information.

Speaker's bio:

Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the US National Academy of Sciences and the National Academy of Engineering, and the recipient of MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, and the ACM Prize in Computing.



Modularity for Decidability: Formal Reasoning about Decentralized Financial Applications
Mooly Sagiv | Certora and Tel Aviv University

2021-05-26, 10:00 - 11:00
Virtual talk

Abstract:

Financial applications such as lending and payment protocols, and their realization in decentralized finance (DeFi) applications on blockchains, comprise a unique domain in which bugs in the code may be exploited by anyone to steal assets. This situation provides unique opportunities for formal verification to enable "move fast and break nothing". Formal verification can be used to detect errors early in the development process and to guarantee correctness when a new version of the code is deployed.

I will describe an attempt to automatically verify DeFi applications and identify potential bugs. The approach is based on breaking the verification of DeFi applications into decidable verification tasks. Each of these tasks is solved via a decision procedure which automatically generates either a formal proof or a test input showing a violation of the specification. In order to overcome undecidability, high-level properties are expressed as ghost state, and static analysis is used to infer how low-level programs update the ghost state.

--

Please contact the MPI-SWS Office Team for Zoom link information.

Speaker's bio:

Mooly Sagiv is a professor in the School of Computer Science at Tel Aviv University and CEO and co-founder of Certora. He is a leading researcher in the area of large-scale (inter-procedural) program analysis, and one of the key contributors to shape analysis. His fields of interest include programming languages, compilers, abstract interpretation, profiling, pointer analysis, shape analysis, inter-procedural dataflow analysis, program slicing, and language-based programming environments. Prof. Sagiv is a recipient of a 2013 senior ERC research grant for Verifying and Synthesizing Software Composition. He also served as a member of the Advisory Board of Panaya Inc., received best-paper awards at PLDI'11 and PLDI'12 for his work on composing concurrent data structures, and received an ACM SIGSOFT Retrospective Impact Award (2011) for program slicing. He is a recipient of the Friedrich Wilhelm Bessel Research Award (2002), an ACM Fellow, and a recipient of the Microsoft Research Outstanding Collaborator Award (2016).



Caching: It's not just about Data
Margo Seltzer | University of British Columbia

2021-05-19, 16:30 - 17:30
Virtual talk

Abstract:

Want to speed up data access? Add a cache! Data caching, as the solution to performance woes, is as old as our field. However, we have been less aggressive at caching computation. While memoization is a widely known technique, it is rarely employed as pervasively as data caching. In this talk, I'll present examples of how we've used computational caching in areas ranging from interpretable machine learning to program synthesis to automatic parallelization.
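The abstract's point, that computation can be cached just like data, is easy to make concrete with memoization. The following sketch is an editorial illustration (not code from the talk): it counts how many evaluations a cached recursive function actually performs.

```python
from functools import lru_cache

calls = {"n": 0}  # count the evaluations that actually run

@lru_cache(maxsize=None)  # cache computation results, keyed by argument
def fib(n):
    calls["n"] += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result = fib(30)
# Naive recursion would make roughly 2.7 million calls; with the cache,
# each of the 31 distinct arguments (0..30) is evaluated exactly once.
print(result, calls["n"])
```

The same idea, trading storage for recomputation, underlies the computational-caching applications the talk describes, though pervasive use requires deciding what to key on and when cached results remain valid.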

--

Please contact the MPI-SWS office team for Zoom link details.

Speaker's bio:

Margo I. Seltzer is the Canada 150 Research Chair in Computer Systems and the Cheriton Family Chair in Computer Science at the University of British Columbia. Her research interests are in systems, construed quite broadly: systems for capturing and accessing data provenance, file systems, databases, transaction processing systems, storage and analysis of graph-structured data, new architectures for parallelizing execution, and systems that apply technology to problems in healthcare.

She is the author of several widely used software packages, including database and transaction libraries and the 4.4BSD log-structured file system. Dr. Seltzer was a co-founder and CTO of Sleepycat Software, the makers of Berkeley DB, recipient of the 2020 ACM SIGMOD Systems Award. She serves on the Advisory Council for the Canadian COVID Alert app and on the Computer Science and Telecommunications Board (CSTB) of the (US) National Academies. She is a past President of the USENIX Association and served as the USENIX representative to the Computing Research Association Board of Directors and on the Computing Community Consortium. She is a member of the National Academy of Engineering, a Sloan Foundation Fellow in Computer Science, an ACM Fellow, a Bunting Fellow, and was the recipient of the 1996 Radcliffe Junior Faculty Fellowship. She is recognized as an outstanding teacher and mentor, having received the Phi Beta Kappa teaching award in 1996, the Abramson Teaching Award in 1999, the Capers and Marion McDonald Award for Excellence in Mentoring and Advising in 2010, and the CRA-E Undergraduate Research Mentoring Award in 2017.

Professor Seltzer received an A.B. degree in Applied Mathematics from Harvard/Radcliffe College and a Ph.D. in Computer Science from the University of California, Berkeley.



From Correctness to High Quality
Orna Kupferman | Hebrew University, Jerusalem

2021-05-12, 10:10 - 11:10
Virtual talk

Abstract:

In the synthesis problem, we are given a specification over input and output signals, and we synthesize a system that realizes the specification: with every sequence of input signals, the system associates a sequence of output signals so that the generated computation satisfies the specification. The above classical formulation of the problem is Boolean. The talk surveys recent efforts to automatically synthesize reactive systems that are not only correct, but also of high quality. Indeed, designers would be willing to give up manual design only after being convinced that the automatic procedure that replaces it generates systems of comparable quality. We distinguish between behavioral quality, which refers to the way the specification is satisfied, and costs, which refer to resources that the system consumes. We argue that both are crucial for synthesis to become appealing in practice.

--

Please contact the MPI-SWS Office Team for Zoom link information.

Speaker's bio:

-



Distributional analysis of sampling-based RL algorithms
Prakash Panangaden | McGill University and Mila

2021-05-05, 15:00 - 16:00
Virtual talk

Abstract:

Distributional reinforcement learning (RL) is a new approach to RL that emphasizes the distribution of the rewards obtained rather than just the expected reward, as in traditional RL. In this work we take the distributional point of view and analyse a number of sampling-based algorithms such as value iteration, TD(0) and policy iteration. These algorithms have been shown to converge under various assumptions, but usually with completely different proofs. We have developed a new viewpoint which allows us to prove convergence using a uniform approach. The idea is based on couplings and on viewing the approximation algorithms as Markov processes in their own right. It originated from work on bisimulation metrics, on which I have been working for the last quarter century. This is joint work with Philip Amortila (U. Illinois), Marc Bellemare (Google Brain) and Doina Precup (McGill, Mila and DeepMind).
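To make the kind of sampling-based algorithm analysed in the talk concrete, here is a minimal tabular TD(0) sketch on a made-up two-state Markov reward process (an editorial toy, not from the talk; the distributional/coupling analysis itself is not shown). The chain is s0 (reward 1) -> s1 (reward 2 plus noise) -> terminal.

```python
import random

GAMMA = 0.9
TRUE_V = {"s1": 2.0, "s0": 1.0 + GAMMA * 2.0}  # analytic values: 2.0 and 2.8

rng = random.Random(1)
V = {"s0": 0.0, "s1": 0.0}
visits = {"s0": 0, "s1": 0}

for _ in range(20000):  # episodes
    transitions = (("s0", 1.0, "s1"),                       # deterministic reward
                   ("s1", 2.0 + rng.uniform(-1, 1), None))  # noisy reward, mean 2
    for s, r, nxt in transitions:
        visits[s] += 1
        alpha = 1.0 / visits[s]                 # Robbins-Monro step sizes
        target = r + (GAMMA * V[nxt] if nxt else 0.0)
        V[s] += alpha * (target - V[s])         # TD(0) update

print(V["s0"], V["s1"])  # approaches 2.8 and 2.0
```

The sequence of value estimates is itself a stochastic (Markov) process driven by the sampled rewards, which is precisely the viewpoint the talk adopts to give uniform convergence proofs.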

Please contact MPI-SWS office for the Zoom links

Speaker's bio:

Prakash Panangaden is a Professor of Computer Science at McGill University. His research interests are primarily in the theoretical foundations of computer science, with a focus on stochastic systems, but range from black holes and curved space-time to reinforcement learning. He has received numerous awards, including the Test of Time award at LICS. He is a Fellow of the ACM.



Computational Social Choice and Incomplete Information
Phokion G. Kolaitis | University of California Santa Cruz and IBM Research

2021-04-28, 16:00 - 17:00
Virtual talk

Abstract:

Computational social choice is an interdisciplinary field that studies collective decision making from an algorithmic perspective. Determining the winners under various voting rules is a mainstream area of research in computational social choice. Many such rules assume that the voters provide complete information about their preferences, an assumption that is often unrealistic because typically only partial preference information is available. This state of affairs has motivated the study of the notions of necessary winners and possible winners with respect to a variety of voting rules.

The main aim of this talk is to present an overview of winner determination under incomplete information and to highlight some recent advances in this area, including the development of a framework that aims to create bridges between computational social choice and data management. This framework infuses relational database context into social choice and allows for the formulation of sophisticated queries about voting rules, candidates, winners, issues, and positions on issues. We will introduce the necessary answer semantics and the possible answer semantics to queries and will explore the computational complexity of query evaluation under these semantics.
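The necessary/possible winner notions can be illustrated with a brute-force toy (an editorial sketch, not from the talk, and feasible only for tiny elections): each voter supplies a partial order, we enumerate all linear extensions, and run plurality on every completion, treating tied top scores as co-winners.

```python
from itertools import permutations, product

candidates = ("a", "b", "c")

def extensions(partial):
    """All total orders consistent with a set of (preferred, dispreferred) pairs."""
    return [o for o in permutations(candidates)
            if all(o.index(x) < o.index(y) for x, y in partial)]

def plurality_winners(profile):
    tally = {c: 0 for c in candidates}
    for ranking in profile:
        tally[ranking[0]] += 1  # plurality: count first choices
    top = max(tally.values())
    return {c for c in candidates if tally[c] == top}  # co-winners under ties

# Three voters with incomplete preferences (made-up example).
voters = [{("a", "b")}, {("a", "c")}, {("b", "c")}]

winner_sets = [plurality_winners(profile)
               for profile in product(*(extensions(v) for v in voters))]
possible = set.union(*winner_sets)         # wins in SOME completion
necessary = set.intersection(*winner_sets)  # wins in EVERY completion
print(possible, necessary)
```

In this example every candidate is a possible winner but no candidate is a necessary winner, already hinting at why the complexity of these questions, and of the richer database-style queries the talk describes, is worth studying.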

------------------------------------------------------------

Please contact the MPI-SWS Office Team for Zoom link information.

Speaker's bio:

Phokion Kolaitis is a Distinguished Research Professor at UC Santa Cruz and a Principal Research Staff Member at the IBM Almaden Research Center. His research interests include principles of database systems, logic in computer science, and computational complexity.



On Probabilistic Program Termination
Joost-Pieter Katoen | RWTH Aachen University

2021-04-21, 10:00 - 11:00
Virtual talk

Abstract:

Program termination is a key question in program verification. This talk considers the termination of probabilistic programs, programs that can describe, e.g., randomized algorithms, security algorithms and Bayesian learning.

Probabilistic termination has several nuances. A program whose diverging runs occur only with probability zero is almost surely terminating (AST). If, in addition, the expected duration until termination is finite, it is positive AST.

This talk presents a simple though powerful proof rule for deciding AST, reports on recent approaches towards automation, and sketches a Dijkstra-like weakest precondition calculus for proving positive AST in a compositional way.
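A textbook example of the gap between AST and positive AST (an editorial illustration, not from the talk) is the symmetric random walk absorbed at zero: it terminates with probability one, yet its expected duration is infinite. The simulation below, with a made-up step cap, shows the signature of this behavior.

```python
import random

def walk_length(rng, cap=10**5):
    """Steps until a symmetric walk started at 1 hits 0, or None at the cap."""
    x, steps = 1, 0
    while x > 0 and steps < cap:
        x += rng.choice((-1, 1))
        steps += 1
    return steps if x == 0 else None

rng = random.Random(0)
lengths = [walk_length(rng) for _ in range(20)]
terminated = [s for s in lengths if s is not None]
# Nearly every run halts (AST), but the empirical mean is dominated by a
# few very long runs -- the hallmark of an infinite expected duration,
# i.e. a program that is AST but not positive AST.
print(len(terminated), max(terminated))
```

Proof rules for AST must therefore be strictly weaker than those for positive AST, which is exactly the distinction the weakest-precondition calculus in the talk is designed to capture.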

-------------

Please contact the MPI-SWS Office Team for Zoom link information.

Speaker's bio:

Joost-Pieter Katoen is a distinguished professor at RWTH Aachen University in the Software Modeling and Verification (MOVES) group and is part-time affiliated with the Formal Methods & Tools group at the University of Twente (NL). He is interested in model checking, concurrency theory, program analysis and formal semantics. He is a member of the Academia Europaea, received an honorary doctorate from Aalborg University, is an ERC Advanced Grant holder and was recently named an ACM Fellow.



Logical Foundations of Cyber-Physical Systems
André Platzer | Carnegie Mellon University

2021-04-14, 15:00 - 16:00
Virtual talk

Abstract:

Cyber-physical systems (CPSs) combine cyber capabilities, such as computation or communication, with physical capabilities, such as motion or other physical processes. Cars, aircraft, and robots are prime examples, because they move physically in space in a way that is determined by discrete computerized control algorithms. Designing these algorithms is challenging due to their tight coupling with physical behavior, while it is vital that these algorithms be correct because we rely on them for safety-critical tasks.

This talk highlights some of the most fascinating aspects of the logical foundations for developing cyber-physical systems with the mathematical rigor that their safety-critical nature demands. The underlying logic, differential dynamic logic, provides an integrated specification and verification language for dynamical systems, such as hybrid systems that combine discrete transitions and continuous evolution along differential equations.

In addition to providing a strong theoretical foundation for CPS, differential dynamic logics have been instrumental in verifying many applications, including the Airborne Collision Avoidance System ACAS X, the European Train Control System ETCS, automotive systems, mobile robot navigation, and a surgical robotic system for skull-base surgery. Logic is the foundation to provably transfer safety guarantees about models to CPS implementations. This technology is also the key ingredient behind Safe AI for CPS.

-------

Please contact the MPI-SWS office team for Zoom link information.

Speaker's bio:

André Platzer is a Professor of Computer Science at Carnegie Mellon University. He develops the logical foundations of cyber-physical systems to characterize their fundamental principles and to answer the question of how we can trust a computer to control physical processes. Dr. Platzer has a Ph.D. from the University of Oldenburg, Germany, received an ACM Doctoral Dissertation Honorable Mention and an NSF CAREER Award, and was named one of the Brilliant 10 Young Scientists by Popular Science magazine and one of AI's 10 to Watch by IEEE Intelligent Systems magazine.



Internet Transparency
Katerina Argyraki | EPFL

2021-03-31, 10:00 - 11:00
Virtual talk

Abstract:

The Internet was meant to be neutral: treat all traffic the same, without discriminating in favor of specific apps, sites, or services. As commercial interests threaten this ideal, many countries have introduced neutrality regulations. Unfortunately, none of them are actually enforceable. In this talk, I will first discuss the challenge of inferring whether a network is neutral or not using solely external observations. Then, I will show that we can go beyond neutrality inference and reach network transparency, in which networks provide explicit hints that enable their users to reliably assess network behavior, including neutrality. Network transparency, however, requires exposing information about what traffic is seen where and when, which can hurt user privacy. I will close by looking at the important question of whether network transparency indeed must come at the cost of reduced anonymity for Internet users.

--

Please contact MPI-SWS Office Team for Zoom link information

Speaker's bio:

Katerina Argyraki is an associate professor of computer science at EPFL, where she does research on network architecture and systems, with a particular interest in network transparency and neutrality. She received an IRTF Applied Networking Research Prize (2020) and Best Paper awards at SOSP (2009) and NSDI (2014), all shared with her students and co-authors. She has been honored with the EuroSys Jochen Liedtke Young Researcher Award (2016) and three teaching awards at EPFL. Prior to EPFL, she worked at Arista Networks from day one, and received her PhD from Stanford (2007).



Functional Synthesis: An Ideal Meeting Ground for Formal Methods and Machine Learning
Kuldeep Meel | National University of Singapore

2021-03-29, 10:00 - 11:00
Virtual talk

Abstract:

Don't we all dream of the perfect assistant whom we can just tell what to do, and who can figure out how to accomplish the tasks? Formally, given a specification F(X,Y) over the set of input variables X and output variables Y, we want the assistant, aka the functional synthesis engine, to design a function G such that (X, Y=G(X)) satisfies F. Functional synthesis has been studied for over 150 years, dating back to Boole in the 1850s, and yet scalability remains a core challenge. Motivated by progress in machine learning, we design a new algorithmic framework, Manthan, which views functional synthesis as a classification problem, relying on advances in constrained sampling for data generation and on advances in automated reasoning for a novel proof-guided refinement and provable verification. In an extensive and rigorous evaluation over 609 benchmarks, we demonstrate that Manthan significantly improves upon the current state of the art, solving 356 benchmarks compared to 280, the most solved by any prior technique: an increase of 76 benchmarks over the current state of the art. The significant performance improvements, along with our detailed analysis, highlight several interesting avenues for future work at the intersection of machine learning, constrained sampling, and automated reasoning.
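The core problem has a compact illustration. The sketch below is a toy brute-force enumerator (my own construction, not Manthan's learning-guided approach; the XOR specification is invented for illustration): it finds a function G whose outputs satisfy a Boolean specification F(X, Y) on every input.

```python
from itertools import product

def F(x1, x2, y):
    # Toy specification: the output y must equal x1 XOR x2.
    return y == (x1 ^ x2)

def synthesize(spec, n_inputs=2):
    # Enumerate every Boolean function G over n_inputs (as a truth
    # table) and return the first one that satisfies spec on all inputs.
    inputs = list(product([0, 1], repeat=n_inputs))
    for table in product([0, 1], repeat=len(inputs)):
        G = dict(zip(inputs, table))
        if all(spec(*x, G[x]) for x in inputs):
            return G
    return None

G = synthesize(F)
print(G)  # the synthesized truth table implements XOR
```

The enumeration is exponential in the truth-table size, which is exactly why scalable engines like Manthan replace it with learning plus proof-guided refinement.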

Please contact MPI-SWS office team for link information

Speaker's bio:

Kuldeep Meel is the Sung Kah Kay Assistant Professor in the Computer Science Department of the School of Computing at the National University of Singapore. His research interests lie at the intersection of Formal Methods and Artificial Intelligence. He is a recipient of the 2019 NRF Fellowship for AI and was named one of AI's 10 to Watch by IEEE Intelligent Systems in 2020. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering, the 2014 Outstanding Masters Thesis Award from the Vienna Center for Logic and Algorithms, and the Best Student Paper Award at CP 2015. He received his Ph.D. (2017) and M.S. (2014) degrees from Rice University, and a B.Tech. (with Honors) degree (2012) in Computer Science and Engineering from the Indian Institute of Technology, Bombay.



Human Factors in Secure Software Development: How we can help developers write secure code
Yasemin Acar | MPI-SP

2021-03-11, 10:00 - 11:00
Virtual talk

Abstract:

We are seeing a persistent gap between the theoretical security of, e.g., cryptographic algorithms and real-world vulnerabilities, data breaches, and possible attacks. Software developers, despite being computer experts, are rarely security experts, and security and privacy are usually, at best, of secondary importance for them. They may not have training in security and privacy or even be aware of the possible implications, and they may be unable to allocate time or effort to ensure that security and privacy best practices and design principles are upheld for their end users. Understanding their education and mindsets, their processes, the tools that they use, and their pitfalls is the foundation for shifting development practices to be more secure. This talk will give an overview of security challenges for developers, and of interdisciplinary research avenues to address them.

--

Please contact MPI-SWS Office team for link information.

Speaker's bio:

Yasemin Acar is a Research Group Leader at MPI-SP, where she focuses on human factors in computer security. Her research centers humans, their comprehension, behaviors, wishes and needs. She aims to better understand how software can enhance users’ lives without putting their data at risk. Her recent focus has been on human factors in secure development, investigating how to help software developers implement secure software development practices. Her research has shown that working with developers on these issues can resolve problems before they ever affect end users. She was a visiting scholar at the National Institute for Standards and Technology in 2019, where she researched how users of smart homes want to have their security and privacy protected. She received the John Karat Usable Security and Privacy student Research Award for the community’s outstanding student in 2018. Her work has also been honored by the National Security Agency in their best cybersecurity paper competition 2016.



Automatic Vulnerability Discovery at Scale
Marcel Böhme | Monash University, Australia

2021-03-09, 10:00 - 11:00
Virtual talk

Abstract:

To establish software security at scale, we need efficient automated vulnerability discovery techniques that can run on thousands of machines. In this talk, we will discuss the abundant opportunities and fundamental limitations of fuzzing, one of the most successful vulnerability discovery techniques. We will explore why only an exponential number of machines will allow us to discover software bugs at a linear rate. We will discuss the kind of correctness guarantees that we can expect from automatic vulnerability discovery, anywhere from formally proving the absence of bugs to statistical claims about program correctness. We shall touch upon unexpected connections to ecological biostatistics and information theory which allow us to address long-standing scientific and practical problems in automatic software testing. Finally, we will take a forward looking view and discuss our larger vision for the field of software security.
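The exponential-cost claim can be illustrated with a toy model (my own construction, not the speaker's analysis): assume each distinct program behavior is hit by a random fuzzing input with geometrically decaying probability, and watch how many distinct behaviors each tenfold increase in effort uncovers.

```python
import math
import random

rng = random.Random(1)
seen, marks = set(), []
for trial in range(1, 10**5 + 1):
    # Toy model: behavior i is exercised with probability 2^-(i+1),
    # sampled here via the inverse CDF of a geometric distribution.
    behavior = int(-math.log2(1 - rng.random()))
    seen.add(behavior)
    if trial in (10**2, 10**3, 10**4, 10**5):
        marks.append(len(seen))
print(marks)  # each 10x in effort adds only a handful of new behaviors
```

Distinct behaviors grow roughly as log2(trials): sustaining a constant discovery rate therefore requires exponentially growing effort, i.e., exponentially many machines for a linear rate of bug discovery.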

--

Please contact MPI-SWS Office for Zoom link information

Speaker's bio:

Marcel Böhme is a 2019 ARC DECRA Fellow and Senior Lecturer (A/Prof) at Monash University, Australia. He completed his PhD at the National University of Singapore, advised by Prof. Abhik Roychoudhury, in 2014, followed by a postdoctoral stint at the CISPA Helmholtz Center in Saarbrücken with Prof. Andreas Zeller and a role as senior research fellow at the TSUNAMi Security Research Centre in Singapore. Marcel leads his group with a reproducibility policy (https://mboehme.github.io/manifesto): all tools and data are made available as open source, and in some cases have been upstreamed for integration into popular fuzzers such as AFL and LibFuzzer. His fuzzers have discovered 100+ bugs in widely used software systems, more than 60 of which are security-critical vulnerabilities registered as CVEs in the US National Vulnerability Database. His most recent fuzzer, Entropic, powers the two largest continuous fuzzing platforms, at Google and Microsoft.



Exterminating bugs in real systems
Fraser Brown | Stanford

2021-03-08, 15:00 - 16:00
Virtual talk

Abstract:

Software is everywhere, and almost everywhere, software is broken. Some bugs just crash your printer; others hand an identity thief your bank account number; still others let nation-states spy on dissidents and persecute minorities. This talk outlines my work preventing bugs using a blend of programming-languages techniques and systems design. First, I'll talk about securing massive, security-critical codebases without clean-slate rewrites. This means rooting out hard-to-find bugs, as in Sys, which scales symbolic execution to find exploitable bugs in systems like the twenty-million-line Chrome browser. It also means proving correctness of especially vulnerable pieces of code, as in VeRA, which automatically verifies part of the Firefox JavaScript engine. Finally, I'll discuss work on stronger foundations for new systems, as in CirC, a recent project unifying compiler infrastructure for program verification, cryptographic proofs, optimization problems, and more.

--

Please contact MPI-SWS Office for link information

Speaker's bio:

Fraser Brown is a PhD student at Stanford advised by Dawson Engler, occasional visiting student at UCSD with Deian Stefan, and NSF graduate research fellowship recipient. She works at the intersection of programming languages, systems, and security, and her research has been used by several companies. She holds an undergraduate degree in English from Stanford.



Advancing Visual Intelligence via Neural System Design
Hengshuang Zhao | University of Oxford

2021-03-05, 09:30 - 10:30
Virtual talk

Abstract:

Building intelligent visual systems is essential for the next generation of artificial intelligence systems. Such systems are a fundamental tool for many disciplines and benefit various potential applications such as autonomous driving, robotics, surveillance, and augmented reality. An accurate and efficient intelligent visual system has a deep understanding of the scene, objects, and humans, and can automatically understand its surroundings. In general, 2D images and 3D point clouds are the two most common data representations in our daily life. Designing powerful image understanding and point cloud processing systems constitutes the two pillars of visual intelligence, enabling artificial intelligence systems to understand and interact with the current state of the environment automatically. In this talk, I will first present our efforts in designing modern neural systems for 2D image understanding, including high-accuracy and high-efficiency semantic parsing structures and a unified panoptic parsing architecture. Then, we go one step further to design neural systems for processing complex 3D scenes, including semantic-level and instance-level understanding. Further, we show our latest work on unified 2D-3D reasoning frameworks, which are fully based on self-attention mechanisms. In the end, I will discuss the challenges, up-to-date progress, and promising future directions for building advanced intelligent visual systems.

--

Please contact MPI-SWS office for link information.

Speaker's bio:

Dr. Hengshuang Zhao is a postdoctoral researcher at the University of Oxford. Before that, he obtained his Ph.D. degree from the Chinese University of Hong Kong. His general research interests cover the broad area of computer vision, machine learning, and artificial intelligence, with special emphasis on building intelligent visual systems. He and his team won several championships in competitive international challenges such as the ImageNet Scene Parsing Challenge. He was recognized as an outstanding/top reviewer at ICCV'19 and NeurIPS'19, and received the rising star award at the World Artificial Intelligence Conference 2020. Some of his research projects are supported by Microsoft, Adobe, Uber, Intel, and Apple. His works have been cited over 5,000 times, with 5,000+ GitHub credits and 80,000+ YouTube views.



Post-Moore Systems — Challenges and Opportunities
Antoine Kaufmann | Max Planck Institute for Software Systems

2021-03-04, 10:30 - 11:30
Virtual talk

Abstract:

Satisfying the growing demand for more compute as processor performance continues to stagnate in the post-Moore era requires radical changes throughout the systems stack. A proven strategy is to move from today's general-purpose platforms to post-Moore systems: specialized systems comprising tightly integrated and co-designed hardware and software components. Currently, designing and implementing these systems is a complex, laborious, and risky process, accessible to few and only practical for the most computing-intensive applications. In this talk, I discuss the nascent challenges in building post-Moore systems and illustrate them through specific systems I have built. As a first step to address these challenges, I present SimBricks, a modular simulation framework that flexibly combines existing simulators for computers, custom hardware, and networks into complete virtual post-Moore systems, enabling developers to compare and evaluate designs earlier and more quickly. I conclude with a look towards future opportunities in abstractions, tooling, and methodology to further simplify the development of post-Moore systems.



Please contact MPI-SWS Office for link information.

Speaker's bio:

Antoine Kaufmann is a research group leader at the Max Planck Institute for Software Systems (MPI-SWS). In his research, he builds computer systems at the intersection of software and hardware with a focus on performance and efficiency. Antoine received his PhD from the University of Washington in 2018, and his MSc and BSc from ETH Zürich in 2014 and 2012 respectively.



New Advances in (Adversarially) Robust and Secure Machine Learning
Hongyang Zhang | Toyota Technological Institute at Chicago

2021-03-03, 09:30 - 10:30
Virtual talk

Abstract:

In this talk, I will describe a distributionally robust learning framework that offers accurate uncertainty quantification and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications on agile robotic control and computer vision. I will also introduce a survey of other real-world applications that would benefit from this framework for future work.

--

Please contact MPI-SWS Office for Zoom link information

Speaker's bio:

Hongyang Zhang is a postdoctoral fellow at the Toyota Technological Institute at Chicago, hosted by Avrim Blum and Greg Shakhnarovich. He obtained his Ph.D. from the CMU Machine Learning Department in 2019, advised by Maria-Florina Balcan and David P. Woodruff. His research interests lie at the intersection of the theory and practice of machine learning, robustness, and AI security. His methods won championships or top rankings in various competitions, such as the NeurIPS'18 Adversarial Vision Challenge (all three tracks), the Unrestricted Adversarial Examples Challenge hosted by Google, and the NeurIPS'20 Challenge on Predicting Generalization in Deep Learning. He also authored a book in 2017.



Towards Trustworthy AI: Provably Robust Extrapolation for Decision Making
Anqi Liu | California Institute of Technology

2021-03-02, 17:00 - 18:00
Virtual talk

Abstract:

To create trustworthy AI systems, we must safeguard machine learning methods from catastrophic failures. For example, we must account for the uncertainty and guarantee the performance for safety-critical systems, like in autonomous driving and health care, before deploying them in the real world. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in such domains, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy with generalization guarantees that rely on strong distributional relationships between training and test examples. In this talk, I will describe a distributionally robust learning framework that offers accurate uncertainty quantification and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications on agile robotic control and computer vision. I will also introduce a survey of other real-world applications that would benefit from this framework for future work.

--

Please contact MPI-SWS office team for Zoom link information

Speaker's bio:

Anqi (Angie) Liu is a postdoctoral scholar research associate in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her Ph.D. from the Department of Computer Science of the University of Illinois at Chicago. She is interested in machine learning for safety-critical tasks and the societal impact of AI. She aims to design principled learning methods and to collaborate with domain experts to build more reliable systems for the real world. She was selected as an EECS Rising Star at UC Berkeley in 2020. Her publications appear in prestigious machine learning conferences such as NeurIPS, ICML, ICLR, AAAI, and AISTATS.



Opening the Black Box: Towards Theoretical Understanding of Deep Learning
Wei Hu | Princeton University, USA

2021-03-01, 14:00 - 15:00
Virtual talk

Abstract:

Despite the phenomenal empirical successes of deep learning in many application domains, its underlying mathematical mechanisms remain poorly understood. Mysteriously, deep neural networks in practice can often fit training data perfectly and generalize remarkably well to unseen test data, despite highly non-convex optimization landscapes and significant over-parameterization. Moreover, deep neural networks show extraordinary ability to perform representation learning: feature representation extracted from a trained neural network can be useful for other related tasks.

In this talk, I will present our recent progress on building the theoretical foundations of deep learning by opening the black box of the interactions among data, model architecture, and training algorithm. First, I will show that gradient descent on deep linear neural networks induces an implicit regularization effect towards low rank, which explains the surprising generalization behavior of deep linear networks for the low-rank matrix completion problem. Next, turning to nonlinear deep neural networks, I will talk about a line of studies on wide neural networks, where, by drawing a connection to neural tangent kernels, we can answer various questions such as how the training loss is minimized, why the trained network can generalize, and why certain components of the network architecture are useful; we also use these theoretical insights to design a new, simple, and effective method for training on noisily labeled datasets. Finally, I will analyze the statistical aspect of representation learning and identify key data conditions that enable efficient use of training data, bypassing a known hurdle in the i.i.d. tasks setting.

--

Please contact the MPI-SWS office team for link information.

Speaker's bio:

Wei Hu is a PhD candidate in the Department of Computer Science at Princeton University, advised by Sanjeev Arora. Previously, he obtained his B.E. in Computer Science from Tsinghua University. He has also spent time as a research intern at research labs of Google and Microsoft. His current research interest is broadly in the theoretical foundations of modern machine learning. In particular, his main focus is on obtaining solid theoretical understanding of deep learning, as well as using theoretical insights to design practical and principled machine learning methods. He is a recipient of the Siebel Scholarship Class of 2021.



Measuring and Enhancing the Security of Machine Learning
Florian Tramer | Stanford

2021-02-25, 17:00 - 18:00
Virtual talk

Abstract:

Failures of machine learning systems can threaten both the security and privacy of their users. My research studies these failures from an adversarial perspective, by building new attacks that highlight critical vulnerabilities in the machine learning pipeline, and designing new defenses that protect users against identified threats. In the first part of this talk, I'll explain why machine learning models are so vulnerable to adversarially chosen inputs. I'll show that many proposed defenses are ineffective and cannot protect models deployed in overtly adversarial settings, such as for content moderation on the Web. In the second part of the talk, I'll focus on the issue of data privacy in machine learning systems, and I'll demonstrate how to enhance privacy by combining techniques from cryptography, statistics, and computer security.

--

Please contact MPI-SWS Office for link information

Speaker's bio:

Florian Tramèr is a PhD student at Stanford University advised by Dan Boneh. His research interests lie in Computer Security, Cryptography and Machine Learning security. In his current work, he studies the worst-case behavior of Deep Learning systems from an adversarial perspective, to understand and mitigate long-term threats to the safety and privacy of users. Florian is supported by a fellowship from the Swiss National Science Foundation and a gift from the Open Philanthropy Project.



Data-Centric Debugging or: How I Learned to Stop Worrying and Use 'Big Data' Techniques to Diagnose Software Bugs
Andrew Quinn | University of Michigan

2021-02-24, 14:00 - 15:00
Virtual talk

Abstract:

Software bugs are pervasive and costly.  As a result, developers spend the majority of their time debugging their software.  Traditionally, debugging involves inspecting and tracking the runtime behavior of a program.  Alas, program inspection is computationally expensive, especially when employing powerful techniques such as dynamic information flow tracking, data-race detection, and data structure invariant checks.  Moreover, debugging logic is difficult to specify correctly.  Current tools (e.g., gdb, Intel Pin) allow developers to write debugging logic in an imperative inline programming model that mirrors the programming style of traditional software.  So, debugging logic faces the same challenges as traditional software, including concurrency, fault handling, dynamic memory, and extensibility.  In general, specifying debugging logic can be as difficult as writing the program being debugged!

In this talk, I will describe a new data-centric debugging framework that alleviates the performance and specification limitations of current debugging models.  The key idea is to use deterministic record and replay to treat a program execution as a massive data object consisting of all program states reached during the execution.  In this framework, developers can express common debugging tasks (e.g., tracking the value of a variable) and dynamic analyses (e.g., data-race detection) as queries over an execution's data object.  My research explores how a data-centric model enables large-scale parallelism to accelerate debugging queries (JetStream and SledgeHammer) and relational query models to simplify the specification of debugging logic (SledgeHammer and the OmniTable Query Model).
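As a rough illustration of the data-centric idea (a hypothetical miniature of my own, not the actual SledgeHammer or OmniTable interface): once a recorded execution is exposed as a relational table of program states, "track the value of a variable" becomes an ordinary query rather than imperative instrumentation.

```python
import sqlite3

# The recorded execution, viewed as a table of (step, variable, value)
# states. The trace below is a made-up run of `for i in range(3): total += i`.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE states (step INTEGER, var TEXT, value INTEGER)")
trace = [(0, "total", 0), (1, "i", 0), (2, "total", 0),
         (3, "i", 1), (4, "total", 1), (5, "i", 2), (6, "total", 3)]
conn.executemany("INSERT INTO states VALUES (?, ?, ?)", trace)

# Debugging query: every value `total` took on, in execution order.
history = [v for (v,) in conn.execute(
    "SELECT value FROM states WHERE var = 'total' ORDER BY step")]
print(history)  # [0, 0, 1, 3]
```

Because the query is declarative, the debugging logic carries none of the concurrency or fault-handling burden of inline instrumentation, and the engine is free to parallelize evaluation across the (in reality, massive) state table.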

--

Please contact MPI-SWS office for Zoom link information

Speaker's bio:

Andrew Quinn is a sixth-year graduate student at the University of Michigan working with Jason Flinn and Baris Kasikci. He received an NSF fellowship (2017) and an MSR fellowship (2017). His research investigates systems, tools, and techniques to make software more reliable. His dissertation focuses on solutions that help developers better understand their software for tasks such as debugging, security forensics, and data provenance. In addition, he has recently been working on solutions to improve the reliability of applications that use emerging hardware, including persistent memory, heterogeneous systems, and edge computing.

Before Michigan, Andrew attended Denison University, where he received degrees in Computer Science and Mathematics. Between his undergraduate and Ph.D. studies, he worked as a software engineer at IBM. When not working, Andrew is likely wrestling with his dogs, enjoying a long run around Ann Arbor, MI, or baking something sweet.



Towards an Actionable Understanding of Conversations
Justine Zhang | Cornell University

2021-02-23, 15:00 - 16:00
Virtual talk

Abstract:

Conversations are central to our social systems. Understanding how conversationalists navigate through them could unlock great improvements in domains like public health, where the provision of social support is crucial. To this end, I develop computational frameworks that can capture and systematically examine aspects of conversations that are difficult, interesting and meaningful for conversationalists and the jobs they do. Importantly, these frameworks aim to yield actionable understandings—ones that reflect the choices that conversationalists make and their consequences, beyond the inert linguistic patterns that are produced in the interaction.

Please contact MPI-SWS Office for link information.

Speaker's bio:

Justine Zhang is a PhD Candidate in the Information Science department at Cornell University. She focuses on developing computational frameworks to study conversations. Her research engages with a wide range of fields, spanning natural language processing, computational social science, political science, psychological counseling, and economics. Previously, she completed a bachelor's degree in computer science at Stanford University. She is a recipient of the Microsoft PhD Fellowship.



Algorithmic Approaches in Finite-Model Theory With Interdisciplinary Applications
Sandra Kiefer | RWTH Aachen University

2021-02-22, 10:30 - 11:30
Virtual talk

Abstract:

Graphs are widespread models for relations between entities. One of the fundamental problems when dealing with graphs is to decide isomorphism, i.e., to check whether two graphs are structurally identical. Even after decades of research, the quest for an efficient graph-isomorphism test still continues. In this talk, I will discuss the Weisfeiler-Leman (WL) algorithm as a powerful combinatorial procedure to approach the graph-isomorphism problem. The algorithm can be seen as a link between many research areas (the "WL net"), including, for example, descriptive complexity theory, propositional proof complexity, and machine learning. I will present work regarding the two central parameters of the algorithm, its dimension and the number of iterations, and explore their connection to finite-model theory. I will also touch on some past and ongoing projects in other areas from the WL net.
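For concreteness, here is a minimal sketch of the 1-dimensional WL algorithm (color refinement) on adjacency-list graphs; the example graphs are my own, chosen to show both a success and the algorithm's well-known failure on regular graphs.

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    # 1-dimensional Weisfeiler-Leman (color refinement): repeatedly
    # replace each vertex's color with a compressed encoding of its own
    # color together with the multiset of its neighbors' colors.
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {v: hash((colors[v],
                           tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())  # histogram of final colors

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_histogram(triangle) != wl_histogram(path))  # True: distinguished

# 1-WL cannot separate regular graphs of the same degree, e.g. the
# 6-cycle from two disjoint triangles (both are 2-regular):
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_tris = {0: [1, 2], 1: [0, 2], 2: [0, 1],
            3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_histogram(c6) == wl_histogram(two_tris))  # True: not separated
```

Differing histograms prove non-isomorphism; identical histograms are inconclusive, which is why the higher dimensions and iteration counts discussed in the talk matter.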

--

Please contact MPI-SWS office team for link information.

Speaker's bio:

Sandra Kiefer completed her undergraduate studies in Bioinformatics and Mathematics at Goethe University Frankfurt, with theses on the Graph Isomorphism Problem and the diameter of polytopes under the supervision of Nicole Schweikardt and Thorsten Theobald, respectively. During her Master’s studies in Mathematics, she further specialised in Group Theory and Combinatorial Optimisation, including one year at the University of Santander in Spain. After obtaining the M.Sc., she moved to RWTH Aachen University, where she has been working on combinatorial algorithms in the context of the Graph Isomorphism Problem under the supervision of Martin Grohe and Pascal Schweitzer. Her research visits to the Australian National University in Canberra and the University of Warsaw have led to collaborations with Brendan McKay and Mikołaj Bojańczyk. In addition to her research, she successfully completed an M.Ed. degree in Mathematics and Spanish.

Sandra obtained her Ph.D. in Computer Science in March 2020. As a postdoctoral researcher, she has continued her work at Martin Grohe’s chair at RWTH Aachen University and has established interdisciplinary collaborations broadening her field of research. She is currently on leave from RWTH Aachen on a temporary position at the University of Warsaw, where she engages in a continuation of her project on a higher-order functional programming language with Mikołaj Bojańczyk.



What Models do we Need in Computer Vision? From Optical Flow to Scene Representations
Eddy Ilg | University of Freiburg, Germany

2021-02-18, 16:00 - 17:00
Virtual talk

Abstract:

Deep learning today is successful in almost every domain of computer vision. The talk will revisit the seminal work of FlowNet to show how deep learning was applied to optical flow and led to a paradigm shift in this domain. Optical flow, disparity, motion and depth boundaries, as well as uncertainty estimation with multi-hypothesis networks, will be covered, and it will be discussed how deep-learned models could surpass traditional methods. Asking the more fundamental question of what models we need in computer vision, the talk will then progress to recent deep-learned scene representation approaches, such as those obtained by learned signed distance functions and NeRF, and provide a perspective on how computer vision might change in the future.

--

Please contact the MPI-SWS office team for link information.

Speaker's bio:

Eddy holds Master's degrees from the University of Southern California in artificial intelligence and from the University of Freiburg in robotics and computer vision. He did his PhD under Thomas Brox and is known for his work on estimating optical flow with convolutional neural networks. Currently, Eddy is a senior research scientist in industry in the domain of augmented reality, working on 3D reconstruction and neural scene representations with a focus on object reconstruction in the wild.



Breaking the chains of implicit trust
Riad Wahby | Stanford

2021-02-17, 15:00 - 16:00
Virtual talk

Abstract:

The success of today's hardware and software systems is due in part to a mature toolbox of techniques, like abstraction, that systems designers use to manage complexity. While powerful, these techniques are also subtly dangerous: they induce implicit trust relationships among system components and between related systems, presenting attackers with many opportunities to undermine the integrity of our hardware and software. This talk discusses an approach to building systems with precise control over trust, drawing on techniques from theoretical computer science. Making this approach practical is a challenge that requires innovation across the entire technology stack, from hardware to theory. I will present several examples of such innovations from my research and describe a few potential directions for future work.

--

Please contact MPI-SWS Office Team for link information

Speaker's bio:

Riad S. Wahby is a Ph.D. candidate at Stanford, advised by Dan Boneh and Keith Winstein. His research interests include systems, computer security, and applied cryptography. Prior to attending Stanford, Riad spent ten years as an analog and mixed-signal integrated circuit designer. Riad and his collaborators received a 2016 IEEE Security and Privacy Distinguished Student Paper award; his work on hashing to elliptic curves is being standardized by the IETF.



Using Data More Responsibly
Juba Ziani | University of Pennsylvania

2021-02-16, 15:00 - 16:00
Virtual talk

Abstract:

Data is now everywhere: enormous amounts of data are produced and processed every day. Data is gathered, exchanged, and used extensively in computations that serve many purposes: e.g., computing statistics on populations, refining bidding strategies in ad auctions, improving recommendation systems, and making loan or hiring decisions. Yet, data is not always transacted and processed in a responsible manner. Data collection often happens without the data holders' consent, and they may not be compensated for their data. Privacy leaks are numerous, exhibiting a need for better privacy protections on personal and sensitive data. Data-driven machine learning and decision-making algorithms have been shown both to mimic past bias and to introduce additional bias in their predictions, leading to inequalities and discrimination. In this talk, I will focus on my research on using data in a more responsible manner. The main focus will be on my work on the privacy issues that arise in data transactions and data-driven analysis, under the lens of a framework known as differential privacy. I will go over my work on designing transactions for data in which we provide differential privacy guarantees to the individuals whose sensitive data we are buying and using in computations, and will focus on my recent work on providing differential privacy to agents in auction settings, where it is natural to want to protect the valuations and bids of those agents. I will also give a brief overview of the other directions I have pursued in my research, both on the optimization and economic challenges that arise when letting agents opt in and out of data sharing and compensating them sufficiently for their data contributions, and on how to reduce the disparate and discriminatory impact of data-driven decision-making.
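To make the privacy framework concrete, here is the textbook Laplace mechanism for an epsilon-differentially-private counting query (a standard construction, not the speaker's auction-specific mechanisms; the salary data is invented for illustration):

```python
import random

def laplace_noise(scale, rng=random):
    # A Laplace(0, scale) sample is the difference of two i.i.d.
    # exponential samples with mean `scale`.
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def private_count(records, predicate, epsilon, rng=random):
    # A counting query has sensitivity 1 (adding or removing one person
    # changes the count by at most 1), so adding Laplace(1/epsilon)
    # noise yields epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng=rng)

rng = random.Random(0)
salaries = [31_000, 48_000, 52_000, 67_000, 90_000, 120_000]
noisy = private_count(salaries, lambda s: s > 50_000, epsilon=0.5, rng=rng)
print(round(noisy, 2))  # true count is 4, plus Laplace noise of scale 2
```

Smaller epsilon means more noise and stronger privacy; the released value is accurate in expectation, but any single individual's presence is statistically hidden.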

--

Please contact MPI-SWS office team for Zoom link information

Speaker's bio:

Juba Ziani is a Warren Center Postdoctoral Fellow at the University of Pennsylvania, hosted by Sampath Kannan, Michael Kearns, Aaron Roth, and Rakesh Vohra. Prior to this, he was a PhD student at Caltech in the Computing and Mathematical Sciences department, where he was advised by Katrina Ligett and Adam Wierman. Juba studies the optimization, game theoretic, economic, ethical, and societal challenges that arise from transactions and interactions involving data. In particular, his research focuses on the design of markets for data, on data privacy with a focus on "differential privacy", on fairness in machine learning and decision-making, and on strategic considerations in machine learning.



Building Scalable Network Stacks for Modern Applications
Ahmed Saeed | MIT

2021-02-15, 14:00 - 15:00
Virtual talk

Abstract:

The network stack in today's operating systems is a remnant from a time when a server had a handful of cores and processed requests from a few thousand clients. It simply cannot keep up with the scale of modern servers and the requirements of modern applications. Specifically, real-time applications and high user expectations impose strict performance requirements on the infrastructure. Further, there is a fundamental shift in the way hardware capacity scales: from simply relying on Moore's law to deliver faster hardware every couple of years to leveraging parallel processing and task-specific accelerators. This talk covers innovations in three key components of the network stack. First, I will cover my work on scalable packet scheduling in software network stacks, improving control over outgoing traffic from large-scale servers. Second, I will move on to my work on improving overload control for servers handling microsecond-scale remote procedure calls, providing better control over incoming traffic to large-scale servers. Third, I will cover my work on Wide Area Network (WAN) congestion control, focusing on network-assisted congestion control schemes for settings where end-to-end solutions fail. The talk will conclude with a discussion of plans for future research in this area.

--

Please contact MPI-SWS office team for Zoom link information

Speaker's bio:

Ahmed Saeed is a postdoctoral associate at MIT working with Prof. Mohammad Alizadeh. His research interests broadly cover the theory, design, and implementation of scalable computer networks and systems, including resource scheduling, congestion control, wireless networks, and cyber-physical systems. Before joining MIT, Ahmed received his PhD in computer science from Georgia Tech, where he was advised by Prof. Mostafa Ammar and Prof. Ellen Zegura. His PhD was partially supported by the Google PhD Fellowship in Systems and Networking. He received his bachelor's degree from Alexandria University in 2010.



Data-Driven Transfer of Insight between Brains and AI Systems
Mariya Toneva | Carnegie Mellon University, USA

2021-02-11, 15:00 - 16:00
Virtual talk

Abstract:

Several major innovations in artificial intelligence (AI) (e.g. convolutional neural networks, experience replay) are based on findings about the brain. However, the underlying brain findings took many years to first consolidate and many more to transfer to AI. Moreover, these findings were made using invasive methods in non-human species. For cognitive functions that are uniquely human, such as natural language processing, there is no suitable model organism and a mechanistic understanding is that much farther away. In this talk, I will present my research program that circumvents these limitations by establishing a direct connection between the human brain and AI systems with two main goals: 1) to improve the generalization performance of AI systems and 2) to improve our mechanistic understanding of cognitive functions. Lastly, I will discuss future directions that build on these approaches to investigate the role of memory in meaning composition, both in the brain and AI. This investigation will lead to methods that can be applied to a wide range of AI domains, in which it is important to adapt to new data distributions, continually learn to perform new tasks, and learn from few samples.

Please contact MPI-SWS Office Team for link information

Speaker's bio:

Mariya Toneva is a Ph.D. candidate in a joint program between Machine Learning and Neural Computation at Carnegie Mellon University, where she is advised by Tom Mitchell and Leila Wehbe. She received a B.S. in Computer Science and Cognitive Science from Yale University. Her research is at the intersection of Artificial Intelligence, Machine Learning, and Neuroscience. Mariya works on bridging language in machines with language in the brain, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems.



Vellvm: Verifying LLVM IR Code
Steve Zdancewic | University of Pennsylvania

2021-01-13, 15:00 - 16:00
Virtual talk

Abstract:

LLVM is an industrial-strength compiler that's used for everything from day-to-day iOS development (in Swift) to pie-in-the-sky academic research projects. This makes the LLVM framework a sweet spot for bug-finding and verification technologies--any improvements to it are amplified across its many applications.

This talk asks the question: what does LLVM code _mean_, and, how can we ensure that LLVM-based tools (compilers, optimizers, code instrumentation passes, etc.) do what they're supposed to -- especially for safety- or security-critical applications? The Verified LLVM project (Vellvm) is our attempt to provide an answer. Vellvm gives a semantics to LLVM IR programs in the Coq interactive theorem prover, which can be used for developing machine-checkable formal properties about LLVM IR programs and transformation passes.

Our approach to modeling LLVM IR semantics uses _interaction trees_, a data structure that is suitable for representing impure, possibly nonterminating programs in dependent type theory. Interaction trees support compositional and modular reasoning about program semantics but are also executable, and hence useful for implementation and testing. We'll see how interaction trees are used in Vellvm and, along the way, we'll get a taste of what LLVM code looks like including some of its trickier semantic aspects. We'll also see (at a high level) how modern interactive theorem provers--in this case, Coq--can be used to verify compiler transformations.

No experience with LLVM or formal verification technologies will be assumed.

--

Please contact office for the Zoom details.

Speaker's bio:

Dr. Zdancewic is a Full Professor and Associate Department Chair in Computer and Information Science at the University of Pennsylvania. He received his Ph.D. in Computer Science from Cornell University in 2002, and he graduated from Carnegie Mellon University with a B.S. in Computer Science and Mathematics in 1996. He is the recipient of an NSF Graduate Research Fellowship, an Intel fellowship, an NSF CAREER award, and a Sloan Fellowship. His numerous publications in the areas of programming languages and computer security include several best paper awards.

Dr. Zdancewic's research centers around using programming languages technology to help build secure and reliable software. He has worked on type-based enforcement of both information-flow and authorization policies, compiler techniques for ensuring memory safety of legacy C code, and, more recently, on using interactive theorem-proving technology to construct highly-trustworthy compiler optimization passes. His interests also include type theory and linear logics, and applications of those ideas.



Distributed synthesis and negotiations
Anca Muscholl | Universite de Bordeaux

2020-12-04, 10:00 - 11:00
Virtual talk

Abstract:

This talk will be a survey of instances of the synthesis problem in distributed models with rendez-vous synchronization. I will talk about synthesis for distributed automata within the theory of Mazurkiewicz traces and about the simpler model of negotiation diagrams (also known as workflow nets).

---

Please contact Office for Zoom details. 

Speaker's bio:

-



Towards Human Behavior Modeling from (Big) Data: From smart rooms, cars and phones to COVID-19
Nuria Oliver | ELLIS

2020-12-02, 16:00 - 17:00
Virtual talk

Abstract:

Human Behavior Modeling and Understanding is a key challenge in the development of intelligent systems. In my talk I will describe a few of the projects that I have carried out over the course of the past 25 years to address this challenge. In particular, I will give an overview of my work on Smart Rooms (real-time facial expression recognition and visual surveillance), Smart Cars (driver maneuver recognition), Smart Offices (multi-modal office activity recognition), Smart Mobile Phones (boredom inference) and finally a Smart World (pandemics, and specifically COVID-19). In this last area, I will share the lessons learned during the past 8 months as Commissioner to the President of the Valencian Government in Spain on AI and Data Science against COVID-19, a very special initiative of collaboration between civil society at large (through a survey), the scientific community, and a public administration (at the presidential level).

--

Please contact office for Zoom link details

Speaker's bio:

Nuria Oliver, PhD. Co-founder and Vice-President of ELLIS (The European Laboratory for Learning and Intelligent Systems); Chief Data Scientist, Data-Pop Alliance, New York, USA and Spain; Commissioner for the President of the Valencian Government on AI and Data Science against COVID-19.

Nuria Oliver is Chief Data Scientist at Data-Pop Alliance, Chief Scientific Advisor at the Vodafone Institute, co-founder and vice-president of ELLIS (The European Laboratory for Learning and Intelligent Systems) and co-founder of the Alicante ELLIS Unit, devoted to research on "Human(ity)-centric Artificial Intelligence". She is a Telecommunications Engineer from the Universidad Politécnica de Madrid and holds a PhD in Artificial Intelligence from the Massachusetts Institute of Technology (MIT). In March 2020, she was named Commissioner for the President of the Valencian Region on AI Strategy and Data Science to fight Covid-19. Since then, she has led a team of 20+ data scientists. She is an independent member of the Board of Directors at Bankia. She has over 25 years of research experience in the areas of human behavior modeling and prediction from data and human-computer interaction. She has been a researcher at Microsoft Research (Redmond, WA), the first female Scientific Director at Telefonica R&D for over 8 years and the first Director of Research in Data Science at Vodafone globally (2017-2019). Her work in the computational modeling of human behavior using Artificial Intelligence techniques, human-computer interaction, mobile computing and Big Data analysis - especially for social good - is well known, with over 160 scientific publications that have received more than 18,000 citations and ten best paper award nominations and awards. She is co-inventor of over 40 filed patents and a regular keynote speaker at international conferences. Her work has contributed to the improvement of services, the creation of new services, the definition of business strategies and the creation of new companies. Nuria is the only Spanish researcher recognized by the ACM as both Distinguished Scientist (2015) and Fellow (2017). She is also a Fellow of the IEEE (2017) and of the European Association for Artificial Intelligence (2016).
She has received an Honorary Doctorate from the Miguel Hernandez University (2018). Dr. Oliver is the youngest and fourth female member of the Spanish Royal Academy of Engineering (2018) and an elected member of the Academia Europaea (2016) and CHI Academy (2018), where she is the only Spanish scientist.

As an advisor, Dr. Oliver is a member of the scientific advisory committees of several European universities, the Gadea Ciencia Foundation, Mahindra Comviva and the Future Digital Society, among others. In addition, she is or has been an advisor to the Spanish, Belgian and Valencian Governments, the European Commission and the World Economic Forum on issues related to Artificial Intelligence.

Dr. Oliver is a member of the program committees of the main international conferences in her research areas. She has also been a member of the organizing committees of 19 international conferences and serves on the editorial boards of five international journals.

Dr. Oliver's work has been recognized internationally with numerous awards. She graduated top of her class at the UPM and received the First National Telecommunications Award (1004). She is the first Spanish scientist to receive the MIT TR100 (today TR35) Young Innovator Award (2004) and the Rising Talent award by the Women's Forum for the Economy and Society (2009). She has been awarded Data Scientist of the Year in Europe (2019), Engineer of the Year Award by the COIT (2018), the Medal for Business and Social Merit by the Valencian Government (2018), the European Digital Woman of the Year award (2016) and the Spanish National Computer Science Award (2016).

She has been named one of the top 11 Artificial Intelligence influencers worldwide by Pioneering Minds (2017), one of Spain's "wonderful minds in technology" by EL PAIS newspaper (2017), "an outstanding female director in technology" (EL PAIS, 2012), one of "100 leaders for the future" (Capital, 2009) and one of the "40 youngsters who will mark the next millennium" (EL PAIS, 1999).

Nuria firmly believes in the value of technology to improve people's quality of life, both individually and collectively, and dedicates her professional life to achieving this.

Her passion is to improve people's quality of life, both individually and collectively, through technology. She is also passionate about scientific outreach. Hence, she regularly collaborates with the media (press, radio, TV) and gives non-technical talks about science and technology to broad audiences, particularly teenagers, with a special interest in girls. She has given talks to more than 10,000 adolescents, has contributed to the book "Digital natives do not exist" (Deusto, 2017) with the chapter "Digital scholars", has written articles for EL PAIS, The Guardian and TechCrunch, among others, and has been co-organizer of large congresses with thousands of attendees, such as the first TEDxBarcelona event dedicated to emerging education, the I and II International Congress on Artificial Intelligence and the I International Congress on Aging. Her talks at WIRED, TEDx and similar events have been viewed thousands of times. Twitter: @nuriaoliver



Can You Believe It? Security and Privacy Case Studies in Online Advertising, Misinformation, and Augmented Reality
Franziska Roesner | University of Washington

2020-11-24, 16:00 - 17:00
Virtual talk

Abstract:

People who use modern technologies are inundated with content and information from many sources, including advertisements on the web, posts on social media, and (looking to the future) content in augmented or virtual reality. While these technologies are transforming our lives and communications in many positive ways, they also come with serious risks to users’ security, privacy, and the trustworthiness of content they see: the online advertising ecosystem tracks individual users and may serve misleading or deceptive ads, social media feeds are full of potential mis/disinformation, and emerging augmented reality technologies can directly modify users’ perceptions of the physical world in undesirable ways. In this talk, I will discuss several lines of research from our lab that explore these issues from a broad computer security and privacy perspective, leveraging methodologies ranging from qualitative user studies to systematic measurement studies to system design and evaluation. What unites these efforts is a key question: what can (or do) users believe about the content they receive through our existing and emerging technologies, and how can we design platforms and ecosystems more robust to these risks?

--

Contact SWS Office for Zoom link details

Speaker's bio:

Franziska (Franzi) Roesner is an associate professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she co-directs the Security and Privacy Research Lab. Her research focuses broadly on computer security and privacy for end users of existing and emerging technologies. For example, her work has studied contexts including the web, smartphones, sensitive user groups (e.g., journalists, survivors of human trafficking), emerging augmented reality (AR) and IoT platforms, and online mis/disinformation. She is the recipient of an MIT Technology Review "Innovators Under 35" Award, an Emerging Leader Alumni Award from the University of Texas at Austin, a Google Security and Privacy Research Award, and an NSF CAREER Award. She serves on the USENIX Security and USENIX Enigma Steering Committees. She received her PhD from the University of Washington in 2014 and her BS from UT Austin in 2008. Her website is at https://www.franziroesner.com.



Intelligibility Throughout the Machine Learning Life Cycle
Jenn Wortman Vaughan | Microsoft Research NYC

2020-11-18, 15:00 - 15:00
Virtual talk

Abstract:

People play a central role in the machine learning life cycle. Consequently, building machine learning systems that are reliable, trustworthy, and fair requires that relevant stakeholders—including developers, users, and the people affected by these systems—have at least a basic understanding of how they work. Yet what makes a system "intelligible" is difficult to pin down. Intelligibility is a fundamentally human-centered concept that lacks a one-size-fits-all solution. I will explore the importance of evaluating methods for achieving intelligibility in context with relevant stakeholders, ways of empirically testing whether intelligibility techniques achieve their goals, and why we should expand our concept of intelligibility beyond machine learning models to other aspects of machine learning systems, such as datasets and performance metrics.

--

Please contact Office for the Zoom details.

Speaker's bio:

Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. Her research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. In recent years, she has turned her attention to human-centered approaches to transparency, interpretability, and fairness in machine learning as part of MSR's FATE group and as co-chair of Microsoft's Aether Working Group on Transparency. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a handful of best paper awards. In her "spare" time, Jenn is involved in a variety of efforts to provide support for women in computer science; most notably, she co-founded the Annual Workshop for Women in Machine Learning, which has been held each year since 2006.



Experiments in Machine Behavior: Cooperating with and through machines
Iyad Rahwan | Max Planck Institute for Human Development

2020-11-11, 15:00 - 15:00
Virtual talk

Abstract:

Human cooperation is fundamental to the success of our species. But emerging machine intelligence poses new challenges to human cooperation. This talk explores two interrelated problems: How can we humans cooperate *through* machines--that is, agree among ourselves on how machines should behave? And how can we cooperate *with* machines--that is, convince self-interested machines to cooperate with us? The talk will then propose an interdisciplinary agenda for understanding and improving our human-machine ecology.

--

Please contact SWS Office for Zoom details

Speaker's bio:

Iyad Rahwan is a director of the Max Planck Institute for Human Development in Berlin, where he founded and directs the Center for Humans & Machines. He is also an honorary professor of Electrical Engineering and Computer Science at the Technical University of Berlin. Until June 2020, he was an Associate Professor of Media Arts & Sciences at the Massachusetts Institute of Technology (MIT). A native of Aleppo, Syria, Rahwan holds a PhD from the University of Melbourne, Australia.



Simple models for optimizing driver earnings in ride-sharing platforms
Evimaria Terzi | Boston University

2020-11-04, 16:00 - 01:00
Virtual talk

Abstract:

On-demand ride-hailing platforms like Uber and Lyft are helping reshape urban transportation, by enabling car owners to become drivers for hire with minimal overhead. Such platforms are a multi-sided market and offer a rich space for studies with socio-economic implications. In this talk I am going to address two questions:

1. In the absence of coordination, what is the best course of action for a self-interested driver that wants to optimize his earnings?

2. In the presence of coordination, is it possible to maximize social welfare objectives in an environment where the objectives of the participants (drivers, customers and the platform) are (often) misaligned?

We will discuss the computational problems behind these questions and describe simple algorithmic solutions that work extremely well in practice. We will demonstrate the practical strength of our approaches with well-designed experiments on novel datasets we collected from such platforms.

--

Please contact Office for the Zoom details.

Speaker's bio:

Evimaria Terzi is a Professor of Computer Science at Boston University. Her work focuses on algorithmic problems in team formation, recommender systems and network applications. She joined BU in 2009 after being a Research Staff Member for two years at the IBM Almaden Research Center. She got her PhD in CS from the University of Helsinki (Finland), her MSc in CS from Purdue University (USA) and her BSc also in CS from the Aristotle University (Greece). Her research is funded by NSF as well as gifts from companies such as Microsoft, Google and Yahoo.



Diagnosing the data ecosystem
Katrina Ligett | Hebrew University of Jerusalem

2020-11-02, 09:00 - 10:00
Virtual talk

Abstract:

In this talk, we'll look together at some of the problems of today's data ecosystem, including issues of privacy, control, fairness, surveillance, and manipulation. We'll explore the ways in which various recent technologies can provide (partial) solutions to some of these woes, and we'll confront some of the limitations of technology. The discussion will expose a rich array of interesting research problems that bridge between computer science, law, economics, and beyond.

--

Please contact Office for the Zoom details.

Speaker's bio:

Katrina Ligett is an Associate Professor of Computer Science at the Hebrew University of Jerusalem, where she is also a member of the Federmann Center for the Study of Rationality, and the head of the program on Internet & Society. She previously was an Assistant Professor of Computer Science and Economics at Caltech. Katrina currently serves on the executive committee of the ACM Special Interest Group on Economics and Computation (sigecom) and is on the editorial board for Transactions on Economics and Computation (TEAC). She is currently the Program Chair for the Symposium on Foundations of Responsible Computing (FORC) 2021, and Program Co-Chair for the International Conference on Algorithmic Learning Theory (ALT) 2021. Her research interests include data privacy, algorithmic game theory, algorithmic fairness, and machine learning theory.



Generalization bounds for rational self-supervised learning algorithms
Boaz Barak | Harvard University

2020-10-27, 16:00 - 17:00
Saarbrücken building E1 4, room 024 / Zoom meeting (see link below)

Abstract:

The generalization gap of a learning algorithm is the expected difference between its performance on the training data and its performance on fresh unseen test samples. Modern deep learning algorithms typically have large generalization gaps, as they use more parameters than the size of their training set. Moreover, the best known rigorous bounds on their generalization gap are often vacuous.

In this talk we will see a new upper bound on the generalization gap of classifiers that are obtained by first using self-supervision to learn a complex representation of the (label free) training data, and then fitting a simple (e.g., linear) classifier to the labels. Such classifiers have become increasingly popular in recent years, as they offer several practical advantages and have been shown to approach state-of-the-art results.

We show that (under the assumptions described below) the generalization gap of such classifiers tends to zero as long as the complexity of the simple classifier is asymptotically smaller than the number of training samples. We stress that our bound is independent of the complexity of the representation, which can use an arbitrarily large number of parameters. Our bound holds assuming that the learning algorithm satisfies certain noise-robustness (adding a small amount of label noise causes only a small degradation in performance) and rationality (getting the wrong label is not better than getting no label at all) properties. These conditions hold widely (and sometimes provably) across many standard architectures. We complement this result with an empirical study, demonstrating that our bound is non-vacuous for many popular representation-learning based classifiers on CIFAR-10 and ImageNet, including SimCLR, AMDIM and BigBiGAN.

The talk will not assume any specific background in machine learning, and should be accessible to a general mathematical audience. Joint work with Yamini Bansal and Gal Kaplun.

Speaker's bio:

-



Security and Privacy Guarantees in Machine Learning with Differential Privacy
Roxana Geambasu | Columbia University

2020-06-15, 16:00 - 17:00
Saarbrücken building E1 4, room Zoom

Abstract:

Machine learning (ML) is driving many of our applications and life-changing decisions. Yet, it is often brittle and unstable, making decisions that are hard to understand or can be exploited. Tiny changes to an input can cause dramatic changes in predictions; this results in decisions that surprise, appear unfair, or enable attack vectors such as adversarial examples. Moreover, models trained on users' data can encode not only general trends from large datasets but also very specific, personal information from these datasets; this threatens to expose users' secrets through ML models or predictions. This talk positions differential privacy (DP) -- a rigorous privacy theory -- as a versatile foundation for building into ML much-needed guarantees of security, stability, and privacy. I first present PixelDP (S&P'19), a scalable certified defense against adversarial example attacks that leverages DP theory to guarantee a level of robustness against these attacks. I then present Sage (SOSP'19), a DP ML platform that bounds the cumulative leakage of secrets through models while addressing some of the most pressing challenges of DP, such as running out of privacy budget and the privacy-accuracy tradeoff. PixelDP and Sage are designed from a pragmatic, systems perspective and illustrate that DP theory is powerful but requires adaptation to achieve practical guarantees for ML workloads.

Speaker's bio:

Roxana Geambasu is an Associate Professor of Computer Science at Columbia University and a member of Columbia's Data Sciences Institute. She joined Columbia in Fall 2011 after finishing her Ph.D. at the University of Washington. For her work in cloud and mobile data privacy, she received: an Alfred P. Sloan Faculty Fellowship, an NSF CAREER award, a Microsoft Research Faculty Fellowship, several Google Faculty awards, a "Brilliant 10" Popular Science nomination, the Honorable Mention for the 2013 inaugural Dennis M. Ritchie Doctoral Dissertation Award, a William Chan Dissertation Award, two best paper awards at top systems conferences, and the first Google Ph.D. Fellowship in Cloud Computing.



Formal Synthesis for Robots
Hadas Kress-Gazit | Cornell University

2020-05-25, 16:00 - 17:00
Kaiserslautern building G26, room online

Abstract:

In this talk I will describe how formal methods such as synthesis – automatically creating a system from a formal specification – can be leveraged to design robots, explain and provide guarantees for their behavior, and even identify skills they might be missing. I will discuss the benefits and challenges of synthesis techniques and will give examples of different robotic systems including modular robots, swarms and robots interacting with people.

Speaker's bio:

Hadas Kress-Gazit is an Associate Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University. She received her Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2008 and has been at Cornell since 2009. Her research focuses on formal methods for robotics and automation and more specifically on synthesis for robotics – automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems including modular robots, soft robots and swarms and synthesizes (pun intended) ideas from different communities such as robotics, formal methods, control, hybrid systems and computational linguistics. She received an NSF CAREER award in 2010, a DARPA Young Faculty Award in 2012, the Fiona Ip Li ’78 and Donald Li ’75 Excellence in teaching award in 2013, the senior faculty champion award in 2019, and the Kenneth A. Goldman ’71 Teaching Award in 2019. She lives in Ithaca with her partner and two kids.



New Perspectives on Old Graph Questions (Video Talk)
Danupon Nanongkai | KTH Stockholm

2020-04-06, 10:00 - 11:00
Saarbrücken building E1 4, room 024 / simultaneous videocast

Abstract:

In this talk, I will discuss an ambitious research program towards graph algorithms that work efficiently across many computational models. I will provide an overview of new results and techniques coming out of this research program. Results that will be discussed include edge connectivity (global minimum cut), vertex connectivity, and maximum matching. No background is required, except for some very basic graph algorithms.

Speaker's bio:

-



Quest for a unified theory of efficient optimization and estimation (Video Lecture)
David Steurer | ETH Zürich

2020-03-26, 14:00 - 15:00
Saarbrücken building E1 4, room Video

Abstract:

Non-convex and discrete optimization problems are at the heart of many algorithmic tasks that arise in machine learning and other computing applications. A promising approach to solve such problems with provable guarantees is the sum-of-squares (SOS) meta-algorithm, which has been discovered multiple times across different disciplines including control theory, proof complexity, and quantum information.

My collaborators and I show in a sequence of recent works that for a wide range of optimization and estimation problems, this meta-algorithm achieves the best known provable guarantees, often improving significantly over all previous methods. For example, for mixtures of spherical Gaussians, we obtain guarantees that improve exponentially over the previous best ones and approach, for the first time, the information-theoretic limits. Remarkably, the SOS meta-algorithm achieves these guarantees without being tailored to this specific problem.

Moreover, we prove that for a rich class of problems, the guarantees that SOS achieves are best possible with respect to a restricted but very powerful model of computation. This result leads to the strongest known concrete lower bounds for NP-complete problems.

Taken together these results point toward a unified theory for efficient optimization and estimation centered around SOS that could change how we think about efficient computation in general.

Speaker's bio:

-



*Remote Talk* Improve Operations of Data Center Networks with Physical-Layer Programmability
Yiting Xia | Facebook

2020-03-26, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Physical-layer programmability enables the network topology to be changed dynamically. In this talk, the speaker will make a case that cloud data center networks can be significantly easier to manage with physical-layer programmability. Three example network architectures will be shown as different use cases of this approach. ShareBackup enhances reliability through sharing backup switches efficiently network-wide, where a backup switch can be brought online instantaneously to recover from failures. Flat-tree solves the problem of choosing the right network topology for different cloud services by dynamically changing topological clustering characteristics of the network. OmniSwitch is a universal building block of data center networks that supports automatic device wiring and easy device maintenance. At the end of the talk, the speaker will briefly introduce an ongoing follow-up research that extends physical-layer programmability from data center networks to backbone networks.

Speaker's bio:

Yiting Xia is a research scientist at Facebook, where she designs and implements Facebook's network infrastructure. Before joining Facebook, she received her PhD from Rice University. Her past and ongoing research is on building novel networked systems that use physical-layer programmability to enable faster transmission, better fault tolerance, and easier management.



*Remote Talk* Learning efficient representations for image and video understanding
Yannis Kalantidis | Facebook AI

2020-03-18, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Two important challenges in image and video understanding are designing more efficient deep Convolutional Neural Networks and learning models that are able to achieve higher-level understanding. In this talk, I will present some of my recent works towards tackling these challenges. Specifically, I will introduce the Octave Convolution [ICCV 2019], a plug-and-play replacement for the convolution operator that exploits the spatial redundancy of CNN activations and can be used without any adjustments to the network architecture. I will also present the Global Reasoning Networks [CVPR 2019], a new approach for reasoning over arbitrary sets of features of the input, by projecting them from a coordinate space into an interaction space where relational reasoning can be efficiently computed.  The two methods presented are complementary and achieve state-of-the-art performance on both image and video tasks. Aiming for higher-level understanding, I will also present our recent works on vision and language modeling, specifically our work on learning state-of-the-art image and video captioning models that are also able to better visually ground the generated sentences with [CVPR 2019] or without [arXiv 2019] explicit localization supervision. The talk will conclude with current research and a brief vision for the future.

Speaker's bio:

Yannis Kalantidis has been a research scientist at Facebook AI in California for the last three years. He received his PhD on large-scale visual search and clustering from the National Technical University of Athens in 2014. He was a postdoc and research scientist at Yahoo Research in San Francisco from 2015 until 2017, where he led the visual similarity search project at Flickr and participated in the Visual Genome dataset efforts with Stanford. At Facebook Research he was part of the video understanding group, conducting research on representation learning, video understanding, and modeling of vision and language. He also leads the Computer Vision for Global Challenges initiative (cv4gc.org), which has organized impactful workshops at top venues such as CVPR and ICLR. Personal website: https://www.skamalas.com/



*Remote Talk* Fairness in machine learning
Niki Kilbertus | University of Cambridge and MPI for Intelligent Systems, Tübingen

2020-03-16, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Machine learning increasingly supports consequential decisions in health, lending, criminal justice, or employment that affect the wellbeing of individual members or entire groups of our society. Such applications raise concerns about fairness, privacy violations, and the long-term consequences of automated decisions in a social context. After a brief introduction to fairness in machine learning, I will highlight concrete settings with specific fairness or privacy ramifications and outline approaches to address them. I will conclude by embedding these examples into a broader context of socioalgorithmic systems and the complex interactions therein.
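To make the fairness setting concrete, one of the simplest group-fairness measures is the demographic-parity gap: the difference in positive-decision rates across groups. A minimal sketch (illustrative only; the talk covers far richer notions and their limitations):

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups.

    decisions: iterable of 0/1 outcomes (e.g. loan granted or not)
    groups:    iterable of group labels, aligned with decisions
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())
```

A gap of 0 means all groups receive positive decisions at the same rate; enforcing this as a constraint is one (contested) formalization of fairness.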

Speaker's bio:

Niki Kilbertus is a final-year PhD student in the Cambridge-Tübingen program, co-supervised by Bernhard Schölkopf and Carl Rasmussen. He is primarily interested in building socially beneficial, robust, and theoretically substantiated machine learning systems. Previously, Niki studied physics and mathematics at the University of Regensburg, with research visits at Harvard and Stanford.



*Remote Talk* Democratizing Error-Efficient Computing
Radha Venkatagiri | University of Illinois at Urbana-Champaign

2020-03-12, 13:00 - 14:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

We live in a world where errors in computation are becoming ubiquitous and come from a wide variety of sources -- from unintentional soft errors in shrinking transistors to deliberate errors introduced by approximation or malicious attacks. Guaranteeing perfect functionality across a wide range of future systems will be prohibitively expensive. Error-efficient computing offers a promising solution by allowing the system to make controlled errors: such systems prevent only as many errors as they need to for an acceptable user experience. Allowing the system to make errors can lead to significant resource (time, energy, bandwidth, etc.) savings. Error-efficient computing can transform the way we design hardware and software to exploit new sources of compute efficiency; however, excessive programmer burden and a lack of principled design methodologies have thwarted its adoption. My research addresses these limitations through foundational contributions that enable the adoption of error-efficiency as a first-class design principle by a variety of users and application domains. In this talk, I will show how my work (1) enables an understanding of how errors affect program execution by providing a suite of automated and scalable error analysis tools, (2) demonstrates how such an understanding can be exploited to build customized error-efficiency solutions targeted at low-cost hardware resiliency and approximate computing, and (3) develops methodologies for principled integration of error-efficiency into the software design workflow. Finally, I will discuss future research avenues in error-efficient computing with multi-disciplinary implications in core disciplines (programming languages, software engineering, hardware design, systems) and emerging application areas (AI, VR, robotics, edge computing).

Speaker's bio:

Radha is a doctoral candidate in Computer Science at the University of Illinois at Urbana-Champaign. Her research interests lie in the area of Computer Architecture and Systems. Radha's dissertation work aims to build efficient computing systems that redefine "correctness" as producing results that are good enough to ensure an acceptable user experience. Radha's research work has been nominated for the IBM Pat Goldberg Memorial Best Paper Award for 2019. She was among 20 people invited to participate in an exploratory workshop on error-efficient computing systems initiated by the Swiss National Science Foundation and is one of 200 young researchers in Math and Computer Science worldwide to be selected for the prestigious 2018 Heidelberg Laureate Forum. Radha was selected for the Rising Stars in EECS and the Rising Stars in Computer Architecture (RISC-A) workshops for the year 2019. Before joining the University of Illinois, Radha was a CPU/Silicon validation engineer at Intel where her work won a divisional award for key contributions in validating new industry standard CPU



Search-based automated program repair and testing
Shin Hwei Tan | Southern University of Science and Technology, Shenzhen, China

2020-03-10, 10:00 - 11:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Software testing remains the key technique for detection of functionality errors and vulnerabilities. Automated test generation typically involves searching in a huge input domain. Similarly, many other software engineering activities such as software debugging and automated repair can be seen as large search problems. Despite advances in constraint solving and symbolic reasoning, the major practical challenges in using automated testing and automated program repair, as opposed to manual testing and manual repair, include (a) the lack of specifications, leading to problems such as overfitting patches and test-suite bias, and (b) the learning curve in using these automated tools, leading to a lack of deployment. In this talk, I will present pragmatic solutions to address these challenges. Lack of specifications can be alleviated by implicitly exploiting lightweight specifications from comment-code inconsistencies, past program versions, or other program artifacts. Lack of deployment can be alleviated via systematic approaches towards collaborative bug finding. Such a collaborative approach can contribute to automated program repair as well as testing.
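As a taste of the search perspective: in search-based test generation, finding an input that drives execution down a particular branch is often phrased as minimizing a "branch distance" fitness function. A minimal sketch of one classical technique, the Alternating Variable Method, here as a deterministic pattern search over a single integer input (illustrative, not the speaker's tooling):

```python
def avm_search(fitness, start=0, max_rounds=1000):
    """Alternating Variable Method: hill-climb a single integer input.

    fitness(x) >= 0 measures how far input x is from covering the target
    branch; 0 means the branch is taken.
    """
    x = start
    for _ in range(max_rounds):
        if fitness(x) == 0:
            break
        # exploratory moves: probe both directions
        if fitness(x + 1) < fitness(x):
            direction = 1
        elif fitness(x - 1) < fitness(x):
            direction = -1
        else:
            break  # local optimum
        # pattern moves: accelerate while the fitness keeps improving
        step = direction
        while fitness(x + step) < fitness(x):
            x += step
            step *= 2
    return x
```

For a branch guarded by `if x == 10000:`, the distance `abs(x - 10000)` guides the search to the input 10000.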

Speaker's bio:

Shin Hwei Tan is a tenure-track Assistant Professor at Southern University of Science and Technology in Shenzhen, China. She obtained her PhD degree from the National University of Singapore and her B.S. (Hons) and MSc degrees from the University of Illinois at Urbana-Champaign, USA. Her main research interests are in automated program repair, software testing, and search-based software engineering. She received the David J. Kuck Outstanding MSc Thesis Award and a Google Anita Borg Memorial Scholarship. She has served on the program committees of several conferences (e.g., ICSE 2020, ASE 2019, FSE Tool Demos, ICSE 2019 SRC, ASE 2020, FSE 2020) and workshops (e.g., the Genetic Improvement workshop at GECCO 2018 and the First International Workshop on Intelligent Bug Fixing (IBF) 2019). She also co-organized the 6th International Workshop on Genetic Improvement and proposed the 1st International Workshop on Automated Program Repair.



Learning by exploration in an unknown and changing environment
Qingyun Wu | University of Virginia

2020-02-27, 14:00 - 15:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Learning is a predominant theme for any intelligent system, humans or machines. Moving beyond the classical paradigm of learning from past experience, e.g., supervised learning from given labels, a learner needs to actively collect exploratory feedback to learn the unknowns. Considerable challenges arise in such a setting, including sample complexity, costly and even outdated feedback.

In this talk, I will introduce our themed efforts on developing solutions to efficiently explore the unknowns and dynamically adjust to the changes through exploratory feedback. Specifically, I will first present our studies in leveraging special problem structures for efficient exploration. Then I will present our work on empowering the learner to detect and adjust to potential changes in the environment adaptively. Besides, I will also highlight the impact our research has generated in top-valued industry applications, including online learning to rank and interactive recommendation.
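A classical example of the exploration problem sketched above is the UCB1 bandit algorithm, which always pulls the arm with the highest optimistic estimate. A minimal sketch (illustrative; the talk concerns more structured and non-stationary settings):

```python
import math

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then pick the arm maximizing
    empirical mean + sqrt(2 ln t / pulls), i.e. optimism under uncertainty."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        sums[arm] += pull(arm)
        counts[arm] += 1
    return counts
```

With deterministic rewards that favor one arm, most of a 100-step horizon is spent on that arm while the others are still probed occasionally.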

Speaker's bio:

Qingyun Wu is a Ph.D. candidate in the Department of Computer Science, University of Virginia. Her research focuses on interactive online learning, including bandit algorithms, reinforcement learning, and their applications in real-world problems. Her research has appeared in multiple top-tier venues, including SIGIR, WWW, KDD, and NeurIPS; and her algorithms have been evaluated in several commercial systems in industry (including Yahoo news recommendation and Snapchat lens recommendation). Qingyun received multiple prestigious awards from the University of Virginia for her excellence in research, including the Virginia Engineering Foundation Fellowship and the Graduate Student Award for Outstanding Research. Her recent work on online learning to rank won the Best Paper Award of SIGIR'2019. She was also selected as one of the Rising Stars in EECS 2019.



Compactness in Cryptography
Giulio Malavolta | UC Berkeley and CMU

2020-02-25, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

The communication complexity of secure protocols is a fundamental question of the theory of computation and has important repercussions in the development of real-life systems. As an example, the recent surge in popularity of cryptocurrencies has been enabled and accompanied by advancements in the construction of more compact cryptographic machinery. In this talk we discuss how to meet the boundaries of compactness in cryptography and how to exploit succinct communication to construct systems with new surprising properties. Specifically, we consider the problem of computing functions on encrypted data: We show how to construct a fully-homomorphic encryption scheme with message-to-ciphertext ratio (i.e. rate) of 1 - o(1), which is optimal. Along the way, we survey the implication of cryptographic compactness in different contexts, such as proof systems, scalable blockchains, and fair algorithms.

Speaker's bio:

Giulio Malavolta is currently a postdoc with a joint appointment at UC Berkeley and CMU. Prior to that, he was a research fellow at the Simons Institute for the Theory of Computing, and he completed his PhD at Friedrich-Alexander-Universität Erlangen-Nürnberg in 2019. His research interests span the theory and applications of cryptography and computer security, and his work has been published in leading venues in cryptography (CRYPTO, EUROCRYPT), theory of computation (FOCS), and system security (S&P, CCS, NDSS).



Software Testing as Species Discovery
Marcel Böhme | Monash University

2020-02-10, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

A fundamental challenge of software testing is the statistically well-grounded extrapolation from program behaviors observed during testing. For instance, a security researcher who has run a fuzzer for a week currently has no means (i) to estimate the total number of feasible program branches, given that only a fraction has been covered so far, (ii) to estimate the additional time required to cover 10% more branches (or to estimate the coverage achieved in one more day, resp.), or (iii) to assess the residual risk that a vulnerability exists when no vulnerability has been discovered. Failing to discover a vulnerability does not mean that none exists, even if the fuzzer was run for a week (or a year). Hence, testing provides no formal correctness guarantees.

In this talk, I establish an unexpected connection with the otherwise unrelated scientific field of ecology, and introduce a statistical framework that models Software Testing and Analysis as Discovery of Species (STADS). For instance, in order to study the species diversity of arthropods in a tropical rain forest, ecologists would first sample a large number of individuals from that forest, determine their species, and extrapolate from the properties observed in the sample to properties of the whole forest. The estimation (i) of the total number of species, (ii) of the additional sampling effort required to discover 10% more species, or (iii) of the probability to discover a new species are classical problems in ecology. The STADS framework draws from over three decades of research in ecological biostatistics to address the fundamental extrapolation challenge for automated test generation. Our preliminary empirical study demonstrates a good estimator performance even for a fuzzer with adaptive sampling bias—AFL, a state-of-the-art vulnerability detection tool. The STADS framework provides statistical correctness guarantees with quantifiable accuracy.
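The ecology connection can be made concrete with the classical Chao1 estimator, which lower-bounds the total number of species from the counts of species seen exactly once and exactly twice. A minimal sketch in the fuzzing reading of STADS, where a "species" is, e.g., a program branch (illustrative, not the STADS implementation):

```python
def chao1(species_counts):
    """Chao1 lower-bound estimate of total species richness.

    species_counts: mapping from species (e.g. program branch) to the
    number of times it was observed (e.g. fuzzer executions covering it).
    """
    s_obs = sum(1 for c in species_counts.values() if c > 0)
    f1 = sum(1 for c in species_counts.values() if c == 1)  # singletons
    f2 = sum(1 for c in species_counts.values() if c == 2)  # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2  # bias-corrected form when f2 == 0
```

Intuitively, many singletons relative to doubletons indicate that much of the "forest" (here: the set of feasible branches) is still undiscovered.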

Speaker's bio:

Marcel Böhme is a 2019 ARC DECRA Fellow and Lecturer (Asst Prof) at the Faculty of IT at Monash University, Australia. He completed his PhD at the National University of Singapore, advised by Prof Abhik Roychoudhury, in 2014, followed by a postdoctoral stint at the CISPA-Helmholtz Zentrum Saarbrücken with Prof. Andreas Zeller and a role as senior research fellow at the TSUNAMi Security Research Centre in Singapore. Marcel's research is focused on automated vulnerability discovery, program analysis, testing, debugging, and repair of large software systems, where he investigates practical topics such as efficiency, scalability, and reliability of automated techniques via theoretical and empirical analysis. His high-performance fuzzers have discovered 100+ bugs in widely used software systems, more than 60 of which are security-critical vulnerabilities registered as CVEs in the US National Vulnerability Database.



Hybrid optimization techniques for multi-domain coupling in cyber-physical systems design
Debayan Roy | TU Munich

2020-02-07, 15:00 - 16:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

In a cyber-physical system (CPS), a physical process is controlled by software running on a cyber platform, and there is a strong interaction between the physical dynamics, the control software, the sensors and actuators, and the cyber resources (i.e., computation, communication, and memory resources). These systems are common in domains such as automotive, avionics, health care, smart manufacturing, and smart grid, among others. The state of practice is to design CPSs using a disjoint set of tools handling different design domains. Such a design methodology has proved to be inefficient with respect to resource usage and performance. In this talk, I will discuss how models from different engineering disciplines can be integrated to design efficient cyber-physical systems. In particular, I will show two use-cases. First, I will talk about a multi-resource platform consisting of high- and low-quality resources. Correspondingly, I will show that a cost-efficient switching control strategy can be designed exploiting heterogeneous resources and by effectively managing the interplay between control theory, scheduling, and formal verification. Second, I will talk about cyber-physical battery management systems (BMS) for high-power battery packs. I will specifically discuss the problem of cell balancing, which is an important task of BMS. I will show how integrated modeling of the individual cells, battery architecture, control circuits, and cyber architecture can lead to energy- and time-efficient scheduling for active cell balancing.

Speaker's bio:

Debayan Roy is a final-year PhD student in the Department of Electrical and Computer Engineering at the Technical University of Munich, where he is being advised by Samarjit Chakraborty. He obtained his Bachelor’s degree from Jadavpur University in Electrical Engineering with First Class Honors, and his Master's degree in Communications Engineering from TU Munich with a High Distinction. His research interests are in the area of modeling, design, and verification of cyber-physical systems. His work has been recognized with the Best Paper Award at RTCSA 2017 and two Best Paper Nominations at DATE 2019 and DATE 2020, respectively.



Designing responsible information systems
Asia J. Biega | Microsoft Research Montreal, Canada

2020-02-07, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Information systems have the potential to enhance or limit opportunities when mediating user interactions. They also have the potential to violate privacy by accumulating observational data into detailed user profiles or by exposing people in sensitive contexts. This talk will cover measures and mechanisms we have proposed for mitigating various threats to user well-being in online information ecosystems. In particular, I am going to focus on two contributions in the areas of algorithmic fairness and privacy. The first contribution demonstrates how to operationalize the notion of equity in the context of search systems and how to design optimization models that achieve equity while accounting for human cognitive biases. The second ties our empirical work on profiling privacy and data collection to concepts in data protection laws. Finally, I will discuss the necessity for a holistic approach to responsible technology, from studying different types of harms, through development of different types of interventions, up to taking a step back and refusing technologies that cannot be fixed by technical tweaks.
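One way to operationalize equity in rankings is to compare each item's share of exposure (under a position-bias model) with its share of relevance. A minimal sketch in the spirit of equity-of-attention measures (the function names and the logarithmic position-bias model are illustrative assumptions, not the talk's formulation):

```python
import math

def exposure(position):
    """Position-bias model: attention decays logarithmically with rank."""
    return 1.0 / math.log2(position + 1)

def equity_gap(relevances, ranking):
    """Total deviation between each item's share of exposure and its
    share of relevance; 0 means exposure is distributed equitably."""
    total_rel = sum(relevances)
    total_exp = sum(exposure(p + 1) for p in range(len(ranking)))
    gap = 0.0
    for pos, item in enumerate(ranking):
        exp_share = exposure(pos + 1) / total_exp
        rel_share = relevances[item] / total_rel
        gap += abs(exp_share - rel_share)
    return gap
```

Note that with two equally relevant items, any strict ordering is inequitable under this measure, which is why amortizing exposure across repeated rankings is attractive.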

Speaker's bio:

Asia J. Biega is a postdoctoral researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) Group at Microsoft Research Montréal. A common theme in her research is protecting user rights and well-being. Through interdisciplinary collaborations, she designs ethically, socially, and legally responsible information and social computing systems and studies how they interact with and influence their users. Before joining Microsoft Research, she completed her PhD summa cum laude at the Max Planck Institute for Informatics and Saarland University. Her doctoral work focused on the issues of privacy and fairness in search systems. She has published her work in leading information retrieval and data mining venues, and has been serving on the program committees of SIGIR, KDD, FAT*, and the senior program committee of SIGIR. Beyond academia, her perspectives and methodological approaches are informed by industrial experience, including work on privacy infrastructure at Google and consulting for Microsoft product teams on issues related to FATE and privacy.



veribetrfs: Verification as a Practical Engineering Tool
Jon Howell | VMware Research, Bellevue, WA, USA

2020-01-17, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: VMR 6312

Abstract:

Recent progress in systems verification has shown that verification techniques can scale to thousands of lines. It is time to ask whether verification can displace testing as an effective path to software correctness. The veribetrfs project is developing a verified high-performance storage system. A primary goal of the project is to reduce verification methodology to engineering practice. Veribetrfs is developed using the Iron★ methodology, a descendant of the Ironclad and IronFleet projects. So far, we have a key-value store with 100k IOPS performance and strong guarantees against data loss. This talk will give an overview of the methodology and describe how we have enhanced it in veribetrfs.

Speaker's bio:

Jon Howell is a distributed systems researcher with a focus on correctness and security. He was a principal contributor to the IronFleet verified distributed systems project, the Ironclad verified secure server project, the Embassies secure client computing project, and the FARSITE decentralized file system.



Computer-Aided Programming Across Software Stack
Işıl Dillig | University of Texas at Austin

2019-12-16, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Program synthesis techniques aim to generate executable programs from some high-level expression of user intent, such as logical specifications, examples, or a naive reference implementation. This talk will survey different flavors of program synthesis and their applications across the entire software stack, ranging from computer end-users all the way to systems programmers. We will also illustrate how program synthesis is useful for addressing different concerns in software development, including functionality, correctness, performance, and security.
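The example-based flavor can be illustrated in a few lines: enumerate expressions from a tiny grammar and return the first one consistent with all input/output examples. A minimal sketch over a toy grammar (not any particular synthesizer):

```python
import itertools

def synthesize(examples, max_rounds=3):
    """Enumerative synthesis over the grammar  e ::= x | 1 | e + e | e * e.

    examples: list of (input, output) pairs; returns the source string of
    the first enumerated expression consistent with every example.
    """
    # candidate terms: (source string, evaluation function)
    terms = [("x", lambda x: x), ("1", lambda x: 1)]
    for _ in range(max_rounds):
        # grow the candidate pool by one level of the grammar
        for (sa, fa), (sb, fb) in itertools.product(list(terms), repeat=2):
            terms.append((f"({sa} + {sb})", lambda x, fa=fa, fb=fb: fa(x) + fb(x)))
            terms.append((f"({sa} * {sb})", lambda x, fa=fa, fb=fb: fa(x) * fb(x)))
        for src, fn in terms:
            if all(fn(i) == o for i, o in examples):
                return src
    return None
```

For instance, `synthesize([(0, 1), (1, 2), (2, 3)])` finds "(x + 1)". Real synthesizers prune aggressively (e.g. by observational equivalence) to cope with the combinatorial blow-up.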

Speaker's bio:

Isil Dillig (PhD, Stanford) is an Associate Professor of Computer Science at the University of Texas at Austin where she leads the UToPiA research group. Her main research area is programming languages, with a specific emphasis on static analysis, verification, and program synthesis. The techniques developed by her group aim to make software systems more reliable, secure, and easier to build. Dr. Dillig is a Sloan Fellow and a recipient of the NSF CAREER award. Her publications have received distinguished paper awards at top-tier conferences, such as PLDI, OOPSLA, ETAPS, and others. She has also served as co-chair of international conferences and workshops, such as, most recently, the 2019 Computer Aided Verification (CAV) conference.



WebAssembly: Mechanisation, Security, and Concurrency
Conrad Watt | University of Cambridge

2019-12-12, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

WebAssembly is the first new language to be introduced to the Web ecosystem in over 20 years. Its official specification is given as a formal semantics, making the language a perfect target for further applications of formal methods. This talk highlights recent work which builds on this formal semantics, and discusses the ongoing development of WebAssembly's relaxed memory model, which is complicated by the language's inter-operation with JavaScript.

Speaker's bio:

Conrad Watt is a PhD student at the University of Cambridge, supervised by Peter Sewell. His work focusses on the WebAssembly language, and he serves as an Invited Expert to the WebAssembly Working Group, assisting with the development of the language's relaxed memory model. He holds a Google Doctoral Fellowship in Programming Technology and Software Engineering.



Prusti – Deductive Verification for Rust
Alex Summers | ETH Zurich

2019-12-03, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Producing reliable systems software is a major challenge, plagued by the ubiquitous problems of shared mutable state, pointer aliasing, dynamic memory management, and subtle concurrency issues such as race conditions; even expert programmers struggle to tame the wide variety of reasons why their programs may not behave as they intended. Formal verification offers potential solutions to many of these problems, but typically at a very high price: the mathematical techniques employed are highly-complex, and difficult for even expert researchers to understand and apply.

The relatively-new Rust programming language is designed to help with the former problem: a powerful ownership type system requires programmers to specify and restrict their discipline for referencing heap locations, providing in return the strong guarantee (almost – see the talk, and RustBelt!) that code type-checked by this system will be free from dangling pointers, unexpected aliasing, race conditions, and the like. While this rules out a number of common errors, the question of whether a program behaves as intended remains.

In this talk, I’ll give an overview of the Prusti project, which leverages Rust’s type system and compiler analyses for formal verification. By combining the rich information available about a type-checked Rust program with separate user-specification of intended behaviour, Prusti enables a user to verify functional correctness of their code without interacting with a complex program logic; in particular, specifications and all interactions with our implemented tool are at the level of abstraction of Rust expressions.

Speaker's bio:

Alex Summers works broadly on program verification techniques and tools, specialising in imperative concurrent languages, and modular deductive verification. Alex obtained his PhD from Imperial College London, worked as a postdoc at Imperial College London and ETH Zurich, and then as a Senior Researcher (Oberassistent) at ETH Zurich. He currently coordinates the Viper and Prusti research projects. In March 2020 he will start a new position as Associate Professor at the University of British Columbia.



Dealing with Epidemics under Uncertainty
Jessica Hoffmann | University of Texas at Austin

2019-11-04, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Epidemic processes can model anything that spreads. As such, they are a useful tool for studying not only human diseases, but also network attacks, chains of activation in the brain, the propagation of real or fake news, the spread of viral tweets, and other processes. In this talk, we investigate epidemics spreading on a graph in the presence of various forms of uncertainty. We present in particular a result about controlling the spread of an epidemic when there is uncertainty about who exactly is infected. We show first that neither algorithms nor results are robust to uncertainty. In other words, uncertainty fundamentally changes how we must approach epidemics on graphs. We also present two related results about learning the graph underlying an epidemic process when there is uncertainty about when people were infected or what infected them.
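As a baseline for the kind of process studied in the talk, here is a discrete-time susceptible-infected (SI) cascade on a graph; the talk's setting adds uncertainty about who is infected and when, which this sketch deliberately omits (names are illustrative):

```python
import random

def simulate_si(adj, patient_zero, p_transmit, steps, seed=0):
    """Discrete-time SI cascade: in each round, every infected node
    infects each susceptible neighbor independently with p_transmit.

    adj: adjacency mapping {node: [neighbors]} of an undirected graph.
    Returns the set of infected nodes after `steps` rounds.
    """
    rng = random.Random(seed)
    infected = {patient_zero}
    for _ in range(steps):
        newly = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < p_transmit:
                    newly.add(v)
        infected |= newly
    return infected
```

Uncertainty in the talk's sense could be layered on top, e.g. by letting the observer see only a noisy subset of `infected`.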

Speaker's bio:

Jessica Hoffmann is a 5th-year PhD student at the University of Texas at Austin, working with Prof. Constantine Caramanis. Her areas of interest include epidemics, applied probability, graph algorithms, combinatorics, and robustness. She received her master's degree in Applied Mathematics from ENS Paris, and most recently was awarded second place in the George E. Nicholson Student Paper Competition. During her graduate studies, she revived the Graduate Women in Computing association at UT Austin and led it for two years; it is still active to this day.



Are We Susceptible to Rowhammer? An End-to-End Methodology for Cloud Providers
Stefan Saroiu | Microsoft Research, Redmond

2019-10-07, 10:30 - 12:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113 / Meeting ID: 6312

Abstract:

Cloud providers are nervous about recent research showing how Rowhammer attacks affect many types of DRAM including DDR4 and ECC-equipped DRAM.  Unfortunately, cloud providers lack a systematic way to test the DRAM present in their servers for the threat of a Rowhammer attack. Building such a methodology needs to overcome two difficult challenges: (1) devising a CPU instruction sequence that maximizes the rate of DRAM row activations on a given system, and (2) determining the adjacency of rows internal to DRAM. This talk will present an end-to-end methodology that overcomes these challenges to determine if cloud servers are susceptible to Rowhammer attacks. With our methodology, a cloud provider can construct worst-case testing conditions for DRAM.

We used our methodology to create worst-case DRAM testing conditions on the hardware used by a major cloud provider for a recent generation of its servers. Our findings show that none of the instruction sequences used in prior work to mount Rowhammer attacks create worst-case DRAM testing conditions. Instead, we construct an instruction sequence that issues non-explicit load and store instructions. Our new sequence leverages microarchitectural side-effects to "hammer" DRAM at a near-optimal rate on modern Skylake platforms. We also designed a DDR4 fault injector capable of reverse engineering row adjacency inside a DRAM device. When applied to our cloud provider's DIMMs, we find that rows inside DDR4 DRAM devices do not always follow a linear map.

Joint work with Lucian Cojocar (VU Amsterdam), Jeremie Kim, Minesh Patel, Onur Mutlu (ETH Zurich), Lily Tsai (MIT), and Alec Wolman (MSR)

Speaker's bio:

Stefan Saroiu is a researcher in the Mobility and Networking Research group at Microsoft Research (MSR) in Redmond. Stefan's research interests span many aspects of systems and networks although his most recent work focuses on systems security. Stefan takes his work beyond publishing results. With his colleagues at MSR, he designed and built (1) the reference implementation of a software-based Trusted Platform Module (TPM) used in millions of smartphones and tablets, and (2) Microsoft Embedded Social, a cloud service aimed at user engagement in mobile apps that has 20 million users. Before joining MSR in 2008, Stefan spent three years as an Assistant Professor at the University of Toronto, and four months at Amazon.com as a visiting researcher where he worked on the early designs of their new shopping cart system (aka Dynamo). Stefan is an ACM Distinguished Member.



Toward Cognitive Security
Claude Castelluccia | Inria

2019-10-02, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Online services, devices, and secret services are constantly collecting data and metadata from users. This data collection is mostly used to target users or customise their services. However, as illustrated by the Cambridge Analytica case, data and technologies are increasingly used to manipulate, influence, or shape people's opinions online, i.e. to "hack" our brains. In this context, it is urgent to develop the field of "cognitive security" in order to better understand these attacks and provide counter-measures. This talk will introduce the concept of cognitive security. We will explore the different types of cognitive attacks and discuss possible research directions.

Speaker's bio:

Claude Castelluccia is a Research Director at Inria, head of the Privatics group, co-founder of the UGA data and cybersecurity institutes, and a member of the Grenoble AI institute (MAIA). He teaches at the Université Grenoble Alpes (CS and Law departments), SKEMA Business School, and Sciences Po Paris. His current research interests include data privacy, surveillance, cognitive security, and trusted and ethical algorithmic decision systems. https://team.inria.fr/privatics/claude-castelluccia/



Human-Centered Design and Data Science for Good
Maria Rauschenberger | Universitat Pompeu Fabra

2019-09-30, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112 / Meeting ID: 9312

Abstract:

How can we build better applications for social-impact issues? For example, combining Human-Centered Design (HCD) and Data Science (DS) can help avoid biases in collecting data through online experiments and in analyzing small data sets. This presentation shows how we combine HCD and DS to design applications and analyze the collected data for good. We will focus mainly on the project "Early screening of dyslexia using a language-independent content game and machine learning". With our two games (MusVis and DGames), we collected data sets (313 and 137 participants) in different languages (mainly Spanish and German) and evaluated them with machine learning classifiers. For MusVis, we mainly use content that refers to a single acoustic or visual indicator, while DGames uses generic content related to various indicators. Our results open the possibility of low-cost, early screening of dyslexia through the Web. In this talk, we will further address the HCD and DS techniques used to reach these results.

Speaker's bio:

Maria Rauschenberger started her research in 2010 by defining and integrating usability and user experience into different contexts. Her Ph.D. topic is the early screening of dyslexia using a language-independent content game and machine learning; her thesis supervisors are Prof. Dr. Ricardo Baeza-Yates and Prof. Dr. Luz Rello. Since January 2016, Maria has been a member of the Web Science and Social Computing Group at the Universitat Pompeu Fabra in Barcelona, chaired by Carlos Castillo. Her research has been recognized three times in a row with the fem:talent scholarship from the University of Applied Sciences Emden/Leer, and she received the prestigious German Reading Award in 2017. Her current research interest is in solving social issues with computer science techniques.



Accelerating Network Applications with Stateful TCP Offloading
YoungGyoun Moon | KAIST

2019-09-24, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

The performance of modern key-value servers or layer-7 load balancers often heavily depends on the efficiency of the underlying TCP stack. Despite numerous optimizations such as kernel-bypassing and zero-copying, performance improvement for TCP applications is fundamentally limited due to the protocol conformance overhead for compatible TCP operations.

In this talk, I will introduce AccelTCP, a hardware-assisted TCP stack architecture that harnesses programmable network interface cards (NICs) as a TCP protocol accelerator. AccelTCP can offload complex TCP operations such as connection setup and teardown entirely to the NIC, which frees a significant amount of host CPU cycles for application processing. In addition, for layer-7 proxies, it supports running connection splicing on the NIC so that the NIC relays all packets of the spliced connections with zero DMA overhead. We showcase the effectiveness of AccelTCP with two real-world applications: (1) Redis, a popular in-memory key-value store, and (2) HAProxy, a widely used layer-7 load balancer. Our evaluation shows that AccelTCP improves their performance by 2.3x and 11.9x, respectively.

Speaker's bio:

YoungGyoun Moon is a Ph.D. candidate at KAIST under the supervision of Prof. KyoungSoo Park. His research interests broadly lie in networked systems, including host networking stacks, middleboxes, and programmable data planes. He is a recipient of the USENIX NSDI Best Paper Award in 2017.



Synthesis from within: implementing automated synthesis inside an SMT solver
Cesare Tinelli | University of Iowa

2019-09-16, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Recent research in automated software synthesis from specifications or observations has leveraged the power of SMT solvers to explore the space of synthesis conjectures efficiently. In most of this work, synthesis techniques are built around a backend SMT solver that is used as a black-box reasoning engine. In this talk, I will describe a successful multiyear research effort by the developers of the SMT solver CVC4 that instead incorporates synthesis capabilities directly within the solver, and then discuss the advances in performance and scope made possible by this approach.

Speaker's bio:

Cesare Tinelli received a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1999 and is an F. Wendell Miller Professor at the University of Iowa. His main research interests are in automated reasoning and its applications in formal methods. He has done seminal work in Satisfiability Modulo Theories (SMT), a subfield of automated reasoning he helped establish through his research and service activities. His research has appeared in more than 80 refereed publications and has been funded both by governmental agencies (AFOSR, AFRL, DARPA, NASA, NSF, and ONR) and corporations (Amazon, Intel, General Electric, Rockwell Collins, and United Technologies). He leads the development of the Kind 2 model checker and co-leads the development of the widely used and award-winning CVC4 SMT solver. He is a founder and coordinator of the SMT-LIB initiative, an international effort aimed at standardizing benchmarks and I/O formats for SMT solvers. He is an associate editor of the Journal of Automated Reasoning and a co-founder of the SMT workshop series and the Midwest Verification Day series. He has served on the program committees of more than 70 automated reasoning and formal methods conferences and workshops, as well as on the steering committees of CADE, ETAPS, FTP, FroCoS, IJCAR, and SMT. He was the PC chair of FroCoS'11 and a PC co-chair of TACAS'15. He has given invited talks or tutorials at CADE, CAV, ETAPS, FroCoS, HVC, NSV, TABLEAUX, VSTTE, and WoLLIC.



Modeling and Individualizing Learning in Computer-Based Environments
Tanja Käser | Stanford University

2019-08-21, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112 / Meeting ID: 6312

Abstract:

Learning technologies are becoming increasingly important in today's education. This includes game-based learning and simulations, which produce high-volume output, and MOOCs (massive open online courses), which reach a broad and diverse audience at scale. The users of such systems often have very different backgrounds, for example in terms of age, prior knowledge, and learning speed. Adaptation to the specific needs of the individual user is therefore essential. In this talk, I will present two of my contributions on modeling and predicting student learning in computer-based environments with the goal of enabling individualization. The first contribution introduces a new model and algorithm for representing and predicting student knowledge. The new approach is efficient and has been demonstrated to outperform previous work in prediction accuracy. The second contribution introduces models that take into account not only the accuracy of the user but also the user's inquiry strategies, improving the prediction of future learning. Furthermore, students can be clustered into groups with different strategies, and targeted interventions can be designed based on these strategies. Finally, I will also describe lines of future research.

Speaker's bio:

Tanja Käser is a senior data scientist at the Swiss Data Science Center (SDSC). Before joining the SDSC, she was a postdoctoral researcher at Stanford University. Tanja also worked as a postdoctoral researcher at ETH Zurich and as a consultant for Disney Research Zurich and Dybuster AG. She received her PhD in Computer Science from ETH Zurich; her thesis was distinguished with the Fritz Kutter Award of ETH Zurich. Tanja works in the field of artificial intelligence in education and is especially interested in modeling and predicting student thinking and learning to provide optimal computer-based learning environments.



Computer Science for Numerics
Martin Ziegler | KAIST

2019-07-19, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Since the introduction of the IEEE 754 floating-point standard in 1985, numerical methods have become ubiquitous and increasingly sophisticated. As the code complexity of numerical libraries grows, so does the need for rigorous software engineering methodology: well established in computer science for the digital processing of discrete data, but lacking in the continuous realm. We apply, adapt, and extend the classical concepts (specification, algorithmics, analysis, complexity, verification) from discrete bit strings, integers, graphs, etc. to real numbers, converging sequences, smooth/integrable functions, bounded operators, and compact subsets: a new paradigm formalizes mathematical structures as continuous abstract data types with rigorous Turing-computable semantics but without the hassle of actual Turing machines.

Speaker's bio:

-



Design Problems: Trustworthy Smart Devices and 3D Printed Lace
Mary Baker | HP Labs in Palo Alto

2019-07-15, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

A growing number of domestic spaces incorporate products that collect data from cameras, microphones and other sensors, leading to privacy concerns. In this talk I report on two user studies performed to learn about perceptions of privacy and trust for sensor-enabled, connected devices such as smart home assistants. The study results suggest that users are more likely to trust devices with materially representative privacy status indicators. This means that the indicators themselves are part of what determines what sensing can take place. I will describe how we have applied the study results to the design of current devices and what the implications are for the physical design of future smart devices.

Time permitting, I will also talk about my other current passion, design for additive manufacturing, and what researchers can do to ensure we realize the exciting potential of this method of production. I will bring exotic 3D printed parts to help demonstrate my points.

Speaker's bio:

Mary Baker is a senior technologist at HP Inc. in Palo Alto. Her research interests cover a broad range of areas where predicting and solving problems tangibly improves the experience people have with technology. Her research topics include mobile systems and applications, physical affordances for IoT privacy, digital preservation, authentication, and design and workflow for additive manufacturing. Before joining HP in 2003 she was on the faculty of the computer science department at Stanford University where she led the MosquitoNet and Mobile People Architecture projects and graduated 7 Ph.D. students. She received a Sloan Foundation Fellowship, an Okawa Foundation Grant, and an NSF CAREER Award. She is a founding member of the editorial board for IEEE Pervasive Computing, for which she also writes the popular "Notes from the Community" column. She received an A.B. in Mathematics and an M.S. and Ph.D. in Computer Science, all from the University of California at Berkeley.



Automated Program Repair
Abhik Roychoudhury | National University of Singapore

2019-07-08, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Automated program repair is an emerging and exciting field of research that allows for the automated rectification of errors and vulnerabilities. Its uses are myriad: (a) improving programmer productivity, (b) automatically fixing security vulnerabilities as they are detected, (c) self-healing software for autonomous devices such as drones, and (d) grading and providing hints for assignments in introductory programming education. One of the key technical challenges in achieving automated program repair is the lack of formal specifications of intended program behavior. In this talk, we will conceptualize the use of symbolic execution approaches and tools for extracting such specifications. This is done by analyzing a buggy program against selected tests, or against reference implementations. Such specification inference capability can be combined with program synthesis techniques to automatically repair programs. The capability of specification inference also demonstrates a novel use of symbolic execution beyond verification and the navigation of large search spaces. Automated program repair via symbolic execution goes beyond search-based approaches, which attempt to lift patches from elsewhere in the program. Such an approach can construct "imaginative" patches, serves as a test-bed for the grand challenge of automated programming, and contributes to the vision of trustworthy self-healing software. Towards the end of the talk, we will put the research on automated repair in the light of the overall practice of software security, sharing some experiences gained at the Singapore Cybersecurity Consortium.

Speaker's bio:

Abhik Roychoudhury is a Professor of Computer Science at the National University of Singapore. He is the Director of the National Satellite of Excellence in Trustworthy Software Systems in Singapore (2019-23). He helped establish, and leads, the Singapore Cybersecurity Consortium, a consortium of more than 40 companies engaging with academia for research, translation, and collaboration in cyber-security. His research focuses on software testing and analysis, software security, and trustworthy software construction. He has advised on secure and smart cyberspace in different capacities, including as an industry advisory board member of the London Office for Rapid Cybersecurity Advancement (LORCA) since 2018. He has been a keynote speaker at several conferences and serves on the Steering Committee of the ACM International Symposium on Software Testing and Analysis (ISSTA). He is General Chair of the upcoming ACM SIGSOFT Symposium on Foundations of Software Engineering (FSE) 2022. His former doctoral students have been placed at universities all over the world (including Peking University, Monash, and University College London) as academics and have received recognition for their doctoral research, including an ACM SIGSOFT Outstanding Doctoral Dissertation Award. Abhik received his Ph.D. in Computer Science from the State University of New York at Stony Brook in 2000.



Fake News During the 2016 U.S. Presidential Elections: Prevalence, Agenda, and Stickiness.
Ceren Budak | University of Michigan

2019-06-10, 10:30 - 12:00
Saarbrücken building E1 5, room 005 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

The spread of fake news was one of the most discussed characteristics of the 2016 U.S. Presidential Election. The concerns regarding fake news have garnered significant attention in both media and policy circles, with some journalists even going so far as to claim that the results of the 2016 election were a consequence of the spread of fake news. Yet, little is known about the prevalence and focus of such content, how its prevalence changed over time, and how this prevalence related to important election dynamics. In this talk, I will address these questions by examining social media, news media, and interview data. These datasets allow us to examine the interplay between news media production and consumption, social media behavior, and the information the electorate retained about the presidential candidates leading up to the election.

Speaker's bio:

-



Automated Test Generation: A Journey from Symbolic Execution to Smart Fuzzing and Beyond
Koushik Sen | UC Berkeley

2019-06-04, 10:30 - 11:45
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

In the last two decades, automation has had a significant impact on software testing and analysis. Automated testing techniques, such as symbolic execution, concolic testing, and feedback-directed fuzzing, have found numerous critical faults, security vulnerabilities, and performance bottlenecks in mature and well-tested software systems. The key strength of automated techniques is their ability to quickly search state spaces by performing repetitive and expensive computational tasks at a rate far beyond the human attention span and computation speed. In this talk, I will give a brief overview of our past and recent research contributions in automated test generation using symbolic execution, program analysis, constraint solving, and fuzzing. I will also describe a new technique, called constraint-directed fuzzing, in which, given a pre-condition on a program as a logical formula, we can efficiently generate millions of test inputs satisfying the pre-condition.

Speaker's bio:

Koushik Sen is a professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests lie in software engineering, programming languages, and formal methods. He is interested in developing software tools and methodologies that improve programmer productivity and software quality. He is best known for his work on "DART: Directed Automated Random Testing" and concolic testing. He received an NSF CAREER Award in 2008, a Haifa Verification Conference (HVC) Award in 2009, an IFIP TC2 Manfred Paul Award for Excellence in Software: Theory and Practice in 2010, a Sloan Foundation Fellowship in 2011, a Professor R. Narasimhan Lecture Award in 2014, an Okawa Foundation Research Grant in 2015, and an ACM SIGSOFT Impact Paper Award in 2019. He has won several ACM SIGSOFT Distinguished Paper Awards. He received the C.L. and Jane W-S. Liu Award in 2004, the C. W. Gear Outstanding Graduate Award in 2005, the David J. Kuck Outstanding Ph.D. Thesis Award in 2007, and a Distinguished Alumni Educator Award in 2014 from the UIUC Department of Computer Science. He holds a B.Tech. from the Indian Institute of Technology, Kanpur, and an M.S. and Ph.D. in CS from the University of Illinois at Urbana-Champaign.



High Performance Operating Systems in the Data Center
Tom Anderson | University of Washington

2019-05-31, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

The ongoing shift of enterprise computing to the cloud provides an opportunity to rethink operating systems for this new setting. I will discuss two specific technologies, kernel bypass for high performance networking and low latency non-volatile storage, and their implications for operating system design. In each case, delivering the performance of the underlying hardware requires novel approaches to the division of labor between hardware, the operating system kernel, and the application library.

Speaker's bio:

-



Systematic Approach to Managing Software Defined Networks
Theophilus Benson | Brown University

2019-05-23, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Software-defined networks (SDNs) and programmable data planes represent a shift in networking paradigm that enables novel applications. Despite the growing interest in and adoption of SDNs, they remain plagued by availability and performance problems. In this talk, I discuss recent and ongoing work by my group to analyze these paradigms and create systematic abstractions that provide control over performance and availability. First, I will discuss Tardis, a system that improves fault tolerance by leveraging the novel programmability provided by SDNs to identify and transform failure-inducing events. Second, I will discuss a pair of projects, Hermes and SCC, that revisit traditional storage principles and apply them to network updates. Through this work, I will demonstrate how the centralization and programmability offered by SDNs enable us to reason more systematically about traditional networking issues such as availability and performance.

Speaker's bio:

Theo is an assistant professor in the Department of Computer Science at Brown University. His group designs frameworks and algorithms for solving practical networking problems, with an emphasis on speeding up the Internet, improving network reliability, and simplifying network management. He has won multiple awards, including best paper awards, an Applied Networking Research Prize, various Yahoo! and Facebook faculty awards, and an NSF CAREER award.



On the Predictability of Heterogeneous SoC Multicore Platforms
Dr Giovani Gracioli | Technical University Munich

2019-05-20, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Multiprocessor Systems-on-Chip (MPSoCs) integrating hard processing cores with programmable logic (PL) are becoming increasingly common. While these platforms were originally designed for high-performance computing applications, their rich feature set can be exploited to efficiently implement mixed-criticality domains serving both critical hard real-time tasks and soft real-time tasks.

In this talk, we show how one can tailor these MPSoCs to support a mixed-criticality system in which cores are strictly isolated to avoid contention on shared resources such as the last-level cache (LLC) and main memory. We present and discuss a set of software and hardware techniques to improve predictability on a modern MPSoC platform. We evaluate our techniques using an image processing application and show the maximum supported processing frequency.

Speaker's bio:

Giovani Gracioli received his Ph.D. in Automation and Systems Engineering from the Federal University of Santa Catarina (UFSC), Brazil, in 2014. Since 2013, he has been an assistant professor at UFSC. From 2017 to 2018, he was a visiting professor in the Department of Electrical and Computer Engineering at the University of Waterloo, Canada. Currently, he is a research associate at the Technical University of Munich, Germany. His research interests include real-time, embedded, and operating systems.



Transparent Scaling of Deep Learning Systems through Dataflow Graph Analysis
Jinyang Li | New York University

2019-05-17, 10:30 - 12:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

As deep learning research pushes towards larger and more sophisticated models, system infrastructure must use many GPUs efficiently. Analyzing the dataflow graph that represents the DNN computation is a promising avenue for optimization. By specializing execution for a given dataflow graph, we can accelerate DNN computation in ways that are transparent to programmers. In this talk, I show the benefits of dataflow graph analysis by discussing two recent systems that we have built to support large-model training and low-latency inference. To train very large DNN models, Tofu automatically rewrites a dataflow graph of tensor operators into an equivalent parallel graph in which each original operator can be executed in parallel across multiple GPUs. To achieve low-latency inference, Batchmaker discovers identical sub-graph computation among different requests to enable batched execution of requests arriving at different times.

Speaker's bio:

Jinyang Li is a professor of computer science at New York University. Her research is focused on developing better system infrastructure to accelerate machine learning and web applications. Most recently, her group has released DGL, an open-source library for programming graph neural networks. Her honors include an NSF CAREER award, a Sloan Research Fellowship, and multiple Google research awards. She received her B.S. from the National University of Singapore and her Ph.D. from MIT, both in Computer Science.



Humans and Machines: From Data Elicitation to Helper-AI
Goran Radanovic | Harvard University

2019-05-16, 14:00 - 15:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Recent AI advances have been driven by high-quality input data, often labeled by human annotators. A fundamental challenge in eliciting high-quality information from humans is that there is often no way to directly verify the quality of the information they provide. Consider, for example, product reviews and marketing surveys where data is inherently subjective, environmental community sensing where data is highly localized, and geopolitical forecasting where the ground truth is revealed in the distant future. In these settings, data elicitation has to rely on peer-consistency mechanisms, which incentivize high-quality reporting by examining the consistency between the reports of different data providers. In this talk, I will discuss some of the recent advances in peer-consistency designs. Furthermore, I will outline some thoughts on an agenda around the design of human-AI collaborative systems.

Speaker's bio:

Goran Radanovic is a postdoctoral researcher at Harvard University, where he works on problems related to human-AI collaboration, value-aligned artificial intelligence, and social computing. His research particularly focuses on incentive mechanism design, reinforcement learning with humans, and algorithmic fairness. He received his Ph.D. in Computer Science from the Swiss Federal Institute of Technology in Lausanne (EPFL) in 2016. He is a recipient of the Early Postdoc Mobility Fellowship (2016-2018) from the Swiss National Science Foundation, and was awarded an EPFL Ph.D. distinction for an outstanding dissertation in 2017.



Conclave: Secure Multi-Party Computation on Big Data
Nikolaj Volgushev | Boston University

2019-05-15, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Secure Multi-Party Computation (MPC) allows mutually distrusting parties to run joint computations without revealing private data. Current MPC algorithms scale poorly with data size, which makes MPC on "big data" prohibitively slow and inhibits its practical use. Many relational analytics queries can maintain MPC’s end-to-end security guarantee without using cryptographic MPC techniques for all operations. Conclave is a query compiler that accelerates such queries by transforming them into a combination of data-parallel, local cleartext processing and small MPC steps. When parties trust others with specific subsets of the data, Conclave applies new hybrid MPC-cleartext protocols to run additional steps outside of MPC and improve scalability further. Our Conclave prototype generates code for cleartext processing in Python and Spark, and for secure MPC using the Sharemind and Obliv-C frameworks. Conclave scales to data sets between three and six orders of magnitude larger than state-of-the-art MPC frameworks support on their own. Thanks to its hybrid protocols and additional optimizations, Conclave also substantially outperforms SMCQL, the most similar existing system.

Speaker's bio:

-



A constructive proof of dependent choice in classical arithmetic via memoization
Étienne Miquey | Inria, Nantes

2019-05-09, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

In 2012, Herbelin developed a calculus (dPAω) in which constructive proofs of the axioms of countable and dependent choice can be derived via the memoization of choice functions. However, the property of normalization (and therefore that of soundness) was only conjectured. The difficulty of the normalization proof is due to the simultaneous presence of dependent types (for the constructive part of the choice), control operators (for classical logic), coinductive objects (to encode functions of type ℕ→A as streams (a₀,a₁,…)), and lazy evaluation with sharing (for these coinductive objects). Building on previous work, we introduce a variant of dPAω presented as a sequent calculus. On the one hand, we take advantage of a variant of Krivine's classical realizability that we developed to prove the normalization of classical call-by-need. On the other hand, we benefit from dLtp, a classical sequent calculus with dependent types in which type safety is ensured using delimited continuations together with a syntactic restriction. By combining the techniques developed in these papers, we define a realizability interpretation à la Krivine of our calculus that allows us to prove normalization and soundness. This talk will cover the whole process, starting from Herbelin's calculus dPAω and ending with its sequent-calculus counterpart dLPAω, which we prove to be sound.

Speaker's bio:

I am Étienne Miquey, currently doing a post-doc in the INRIA team Gallinette (in Nantes) where I mainly work with Guillaume Munch-Maccagnoni. Previously, I was a PhD student under the co-supervision of Hugo Herbelin (in the IRIF laboratory, Paris) and Alexandre Miquel (in the Mathematical Institute of the Faculty of Engineering of Montevideo). I am mainly interested in the computational content of proofs through the Curry-Howard correspondence, and especially in classical logic.



Sharing-Aware Resource Management for Performance and Protection
Sandhya Dwarkadas | Department of Computer Science, University of Rochester

2019-05-02, 10:00 - 11:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

Recognizing that applications (whether in mobile, desktop, or server environments) are rarely executed in isolation today, I will discuss some practical challenges in making best use of available hardware and our approach to addressing these challenges. I will describe two independent and complementary control mechanisms using low-overhead hardware performance counters that we have developed: a sharing- and resource-aware mapper (SAM) to effect task placement with the goal of localizing shared data communication and minimizing resource contention based on the offered load; and an application parallelism manager (MAPPER) that controls the offered load with the goal of improving system parallel efficiency. If time permits, I will also outline our work on streamlining instruction memory management and address translation to eliminate redundancy and improve efficiency, especially in mobile environments. Our results emphasize the need for low-overhead monitoring of application behavior under changing environmental conditions in order to adapt to environment and application behavior changes.

Speaker's bio:

Sandhya Dwarkadas is the Albert Arendt Hopeman Professor of Engineering, and Professor and Chair of Computer Science with a secondary appointment in Electrical and Computer Engineering, at the University of Rochester, where she has been on the faculty since 1996. She received her Bachelor's degree from the Indian Institute of Technology, Madras, India, and her M.S. and Ph.D. from Rice University. She is a fellow of the ACM and IEEE. She is also a member of the board and steering committee for the Computing Research Association's Committee on the Status of Women in Computing Research (CRA-W). Her areas of research interest include parallel and distributed computing, computer architecture and the interaction and interface between the compiler, runtime/operating system, and underlying architecture. Her research lies at the intersection of computer hardware and software with a particular focus on support for parallelism. She has made fundamental contributions to the design and implementation of shared memory both in hardware and in software, and to hardware and software energy- and resource-aware configurability. URL: http://www.cs.rochester.edu/u/sandhya



Edge Computing in the Extreme and its Applications
Suman Banerjee | University of Wisconsin-Madison

2019-04-30, 13:00 - 14:30
Saarbrücken building E1 5, room 105 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

The notion of edge computing introduces new computing functions away from centralized locations and closer to the network edge, thus facilitating new applications and services. This enhanced computing paradigm provides application developers with new opportunities that are not available otherwise. In this talk, I will discuss why placing computation functions at the extreme edge of our network infrastructure, i.e., in wireless Access Points and home set-top boxes, is particularly beneficial for a large class of emerging applications. I will discuss a specific approach, called ParaDrop, to implement such edge computing functionalities, and use examples from different domains -- smarter homes, sustainability, and intelligent transportation -- to illustrate the new opportunities around this concept.

Speaker's bio:

-



Querying Regular Languages over Sliding Windows
Moses Ganardi | University of Siegen

2019-04-29, 14:30 - 15:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

A sliding window algorithm for a language L receives a stream of symbols and has to decide at each time step whether the suffix of length n belongs to L or not. The window size n is either a fixed number (in the fixed-size model) or can be controlled online by an adversary (in the variable-size model). In this talk we give a survey on recent results for deterministic and randomized sliding window algorithms for regular languages.
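As an illustration of the problem setup (my sketch, not from the talk), a naive fixed-size sliding window algorithm stores the last n symbols and re-runs a DFA over the whole window after every arrival, costing time and space linear in n per step; the talk's results concern when and how this can be done with far less memory. The DFA below is a toy example for the language of strings containing the factor "ab".

```python
from collections import deque

# Toy DFA for L = "strings containing the factor 'ab'"
# (state 2 is accepting and absorbing). The language is illustrative.
DELTA = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}
ACCEPT = {2}

def sliding_window_membership(stream, n):
    """Naive fixed-size sliding window algorithm: after each symbol,
    re-run the DFA on the current window of (at most) the last n symbols."""
    window = deque(maxlen=n)   # old symbols fall out automatically
    answers = []
    for symbol in stream:
        window.append(symbol)
        state = 0
        for s in window:
            state = DELTA[(state, s)]
        answers.append(state in ACCEPT)
    return answers
```

For example, `sliding_window_membership("aabba", 3)` returns `[False, False, True, True, False]`: the window "abb" still contains "ab", but once it slides to "bba" the factor is gone.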

Speaker's bio:

Moses Ganardi is a PhD student in Computer Science at the University of Siegen, Germany, under the supervision of Prof. Dr. Markus Lohrey. His main research interests are formal languages and automata, and their applications in the context of streaming algorithms. He received his B.Sc. degree in Computer Science and Mathematics, in 2011 and 2012, respectively, and his M.Sc. degree in Computer Science in 2013 from the RWTH Aachen, Germany.



The complexity of reachability in vector addition systems
Sylvain Schmitz | École Normale Supérieure Paris-Saclay

2019-04-05, 14:00 - 15:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111 / Meeting ID: 6312

Abstract:

The last few years have seen a surge of decision problems with an astronomical, non-primitive-recursive complexity, in logic, verification, and games. While the existence of such uncontrived problems has been known since the early 1980s, the field has matured with techniques for proving complexity lower and upper bounds and the definition of suitable complexity classes. This framework has been especially successful for analysing so-called `well-structured systems'---i.e., transition systems endowed with a well-quasi-order, which underlie most of these astronomical problems---, but it turns out to be applicable to other decision problems that resisted analysis, including two famous problems: reachability in vector addition systems and bisimilarity of pushdown automata. In this talk, I will explain how some of these techniques apply to reachability in vector addition systems, yielding tight Ackermannian upper bounds for the decomposition algorithm initially invented by Mayr, Kosaraju, and Lambert; this will be based on joint work with Jérôme Leroux.

Speaker's bio:

Sylvain Schmitz is an assistant professor (maître de conférences) at École Normale Supérieure Paris-Saclay since 2008 and a junior member of Institut Universitaire de France since 2018. He received his Ph.D. degree in Computer Science from Université Nice-Sophia Antipolis in 2007 and his habilitation from École Normale Supérieure Paris-Saclay in 2017. His research interests are in logic, verification, and complexity.



Worst-Case Execution Time Guarantees for Runtime-Reconfigurable Architectures
Marvin Damschen | Karlsruhe Institute of Technology

2019-04-04, 14:00 - 15:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029 / Meeting ID: 6312

Abstract:

Real-time embedded systems need to be analyzable for execution time guarantees. Despite significant scientific advances, however, timing analysis lags years behind current microarchitectures with out-of-order scheduling pipelines, several hardware threads and multiple (shared) cache layers. To satisfy the increasing demand for predictable performance, analyzable performance features are required. We introduce runtime-reconfigurable instruction set processors as one way to escape the scarcity of analyzable performance features while preserving the flexibility of the system. To this end, we first present a reconfiguration controller for guaranteed reconfiguration delays of accelerators onto an FPGA. We propose a novel timing analysis approach to obtain worst-case execution time (WCET) guarantees for applications that utilize runtime-reconfigurable custom instructions (CIs), which each utilize one or more accelerators. Given the constrained reconfigurable area of an FPGA, we solve the problem of selecting CIs for each computational kernel of an application to optimize its worst-case execution time. Finally, we show that runtime reconfiguration provides the unique feature of optimized static WCET guarantees and optimization of the average-case execution time (maintaining statically-given WCET guarantees) by repurposing reconfigurable area for different selections of CIs at runtime.

Speaker's bio:

Marvin Damschen received his Ph.D. (Dr.-Ing.) in Computer Science from the Karlsruhe Institute of Technology (KIT), Germany, under the supervision of Prof. Dr. Jörg Henkel in Dec. 2018. Currently, he is a postdoctoral researcher at the Chair for Embedded Systems at KIT. His main research interests are timing analysis and architectures for real-time embedded systems with special focus on runtime-reconfigurable architectures. Marvin Damschen received a B.Sc. degree - with distinction - and M.Sc. degree - with distinction - in Computer Science with a minor in Mathematics from the University of Paderborn, Germany, in 2012 and 2014, respectively.



Algorithmic Bias: From Discrimination Discovery to Fairness-Aware Data Mining
Carlos Castillo | Universitat Pompeu Fabra

2019-04-01, 11:00 - 12:00
Saarbrücken building E1 4, room 024

Abstract:

Algorithms and decision making based on Big Data have become pervasive in all aspects of our daily (offline and online) lives. Social media, e-commerce, professional, political, educational, and dating sites, to mention just a few, shape our possibilities as individuals, consumers, employees, voters, students, and lovers. In this process, vast amounts of personal data are collected and used to train machine-learning based systems. These systems are used to classify and rank people, and can discriminate against us on grounds such as gender, age, or ethnicity, even without intention, and even if legally protected attributes, such as race, are not explicit in the data. Algorithmic bias exists even when the developer of the algorithm has no intention to discriminate. Sometimes it may be inherent to the data sources used (software making decisions based on data can reflect, or even amplify, the results of historical discrimination), but even when the sensitive attributes have been suppressed from the input, a well-trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data.

Speaker's bio:

Carlos Castillo is a Distinguished Research Professor at Universitat Pompeu Fabra in Barcelona. He is a web miner with a background on information retrieval, and has been influential in the areas of web content quality and credibility, and adversarial web search. He is a prolific researcher with more than 80 publications in top-tier international conferences and journals, receiving 13,000+ citations. His works include a book on Big Crisis Data, as well as monographs on Information and Influence Propagation, and Adversarial Web Search.



Feedback-Control for Self-Adaptive Predictable Computing
Martina Maggio | Lund University

2019-03-13, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Cloud computing gives the illusion of infinite computational capacity and allows for on-demand resource provisioning. As a result, over the last few years, the cloud computing model has experienced widespread industrial adoption, and companies like Netflix have offloaded their entire infrastructure to the cloud. However, since even the largest datacenters are of finite size, cloud infrastructures have experienced overload due to overbooking or transient failures. In essence, this is an excellent opportunity for the design of control solutions that tackle the problem of mitigating overload peaks using feedback from the infrastructure. These solutions can then exploit control-theoretical principles and take advantage of the knowledge and the analysis capabilities of control tools to provide formal guarantees on the predictability of the infrastructure behavior. This talk introduces recent research advances on feedback control in the cloud computing domain, together with my research agenda for enhancing predictability and formal guarantees for cloud computing.

Speaker's bio:

Martina Maggio is an Associate Professor at the Department of Automatic Control, Lund University. Her research area is the application of control-theoretical techniques to computing systems' problems. Martina completed her Ph.D. at Politecnico di Milano, working with Alberto Leva. During her Ph.D., she spent one year as a visiting graduate student at the Computer Science and Artificial Intelligence Laboratory at MIT, working with Anant Agarwal and Hank Hoffmann on the Self-Aware Computing project. She joined Lund University in 2012 as a postdoctoral researcher, working with Karl-Erik Årzén. Since then, her research has been mainly focused on resource allocation for cloud infrastructures and real-time systems. Martina became an Assistant Professor in 2014, and then Docent and Associate Professor in 2017.



Automated Resource Management in Large-Scale Networked Systems
Sangeetha Abdu Jyothi | University of Illinois

2019-03-11, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Internet applications rely on large-scale networked environments such as the cloud for their backend support. In these multi-tenanted environments, various stakeholders have diverse goals. The objective of the infrastructure provider is to increase revenue by utilizing the resources efficiently. Applications, on the other hand, want to meet their performance requirements at minimal cost. However, estimating the exact amount of resources required to meet the application needs is a difficult task, even for expert users. Easy workarounds employed for tackling this problem, such as resource over-provisioning, negatively impact the goals of the provider, applications, or both. In this talk, I will discuss the design of application-aware self-optimizing systems through automated resource management that helps meet the varied goals of the provider and applications in large-scale networked environments. The key steps in closed-loop resource management include learning of application resource needs, efficient scheduling of resources, and adaptation to variations in real time. I will describe how I apply this high-level approach in two distinct environments using (a) Morpheus in enterprise clusters, and (b) Patronus in cellular provider networks with geo-distributed micro data centers. I will also touch upon my related work in application-specific context at the intersection of network scheduling and deep learning. I will conclude with my vision for self-optimizing systems including fully automated clouds and an elastic geo-distributed platform for thousands of micro data centers.

Speaker's bio:

Sangeetha Abdu Jyothi is a Ph.D. candidate at the University of Illinois at Urbana-Champaign. Her research interests lie in the areas of computer networking and systems with a focus on building application-aware self-optimizing systems through automated resource management. She is a winner of the Facebook Graduate Fellowship (2017-2019) and the Mavis Future Faculty Fellowship (2017-2018). She was invited to attend the Rising Stars in EECS workshop at MIT (2018). Website: http://abdujyo2.web.engr.illinois.edu



Predictable Execution of Real-Time Applications on Many-Core Platforms
Matthias Becker | KTH Royal Institute of Technology

2019-03-08, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Nowadays, innovation in many industrial areas is software driven: existing software functions become more complex and new software functions are constantly introduced. The rapid increase in functionality comes along with a steep increase in software complexity. To cope with this transition, current trends shift away from today’s distributed architectures towards integrated architectures, in which previously distributed functionality is consolidated on fewer, more powerful computers. Such a trend can, for example, be observed in the automotive or avionics domain. This can ease the integration process, reduce the hardware complexity, and ultimately save costs. One promising hardware platform for these powerful embedded computers is the many-core processor. A many-core processor hosts a vast number of compute cores that are partitioned into clusters connected by a Network-on-Chip. However, ensuring that real-time requirements are satisfied in the presence of contention on shared resources, such as memories, remains an open issue. In addition, industrial applications are often subject to timing constraints on the data propagation through a chain of semantically related tasks. Such requirements pose challenges to the system designer, who is only able to verify them after system synthesis (i.e., very late in the design process). In this talk, we present methods that transform timing constraints on the data propagation delay into precedence constraints between individual task instances. An execution framework for the clusters of the many-core is proposed that allows access to cluster-external memory while avoiding contention on shared resources by design. Spatial and temporal isolation between different clusters is provided by a partitioning and configuration of the Network-on-Chip that further reduces the worst-case access times to external memory.

Speaker's bio:

Matthias Becker is a postdoc researcher at KTH Royal Institute of Technology since February 2018. He received his B.Eng. degree in Mechatronics/Automation Systems from the University of Applied Sciences Esslingen, Germany in 2011. In the year 2013 he got his M.Sc. degree in Computer Science specializing in embedded computing from the University of Applied Sciences Munich, Germany. He received his Licentiate and PhD degree in Computer Science and Engineering from Mälardalen University, Sweden in 2015 and 2017 respectively. Matthias has been a visiting researcher at CISTER - Research Centre in Real-Time and Embedded Computing Systems in Porto, Portugal for two months in 2015 and for three months in 2016.



Privacy, Transparency and Trust in the User-Centric Internet
Oana Goga | Université Grenoble Alpes

2019-03-07, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

The rise of user-centric Internet systems such as Facebook or Twitter brought security and privacy threats that became out of control in recent years. To make such systems more dependable, my research focuses on three key aspects: (1) privacy: ensure users understand and can control the information that is disclosed about them; (2) transparency: ensure users understand how their data is being used and how it affects the services they receive; and (3) trust: ensure users can evaluate the trustworthiness of content consumed from these systems. 

In this talk, I will share my research efforts in understanding and tackling security and privacy threats in social media targeted advertising. Despite a number of recent controversies regarding privacy violations, lack of transparency, or vulnerability to discrimination or propaganda by dishonest actors, users still have little understanding of what data targeted advertising platforms have about them and why they are shown the ads they see. To address such concerns, Facebook recently introduced the "Why am I seeing this?" button that provides users with an explanation of why they were shown a particular ad. I first investigate the level of transparency provided by this mechanism by empirically measuring whether it satisfies a number of key properties and what the consequences of the current design choices are. To provide a better understanding of the Facebook advertising ecosystem, we developed a tool called AdAnalyst that collects the ads users receive and provides aggregate statistics. I will then share our findings from analyzing data from over 600 real-world AdAnalyst users, in particular on who is advertising on Facebook and how these advertisers are targeting users and customizing ads via the platform.

Speaker's bio:

Oana Goga is a tenured CNRS research scientist in the Laboratoire d’Informatique Grenoble (France) since October 2017. Prior to this, she was a postdoc at the Max Planck Institute for Software Systems and obtained a PhD in 2014 from Pierre et Marie Curie University in Paris. She is the recipient of a young researcher award from the French National Research Agency (ANR). Her research interests are in security and privacy issues that arise in online systems that have at their core user-provided data. Her recent work on security and privacy issues in social media advertising led to changes in the Facebook advertising platform.



Towards Literate Artificial Intelligence
Mrinmaya Sachan | Carnegie Mellon University

2019-03-05, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Over the past decade, the field of artificial intelligence (AI) has seen striking developments. Yet, today’s AI systems sorely lack the essence of human intelligence, i.e., our ability to (a) understand language and grasp its meaning, (b) assimilate common-sense background knowledge of the world, and (c) draw inferences and perform reasoning. Before we even begin to build AI systems that possess the aforementioned human abilities, we must ask an even more fundamental question: How would we even evaluate an AI system on the aforementioned abilities? In this talk, I will argue that we can evaluate AI systems in the same way as we evaluate our children - by giving them standardized tests. Standardized tests are administered to students to measure the knowledge and skills gained by them. Thus, it is natural to use these tests to measure the intelligence of our AI systems. Then, I will describe Parsing to Programs (P2P), a framework that combines ideas from semantic parsing and probabilistic programming for situated question answering. We used P2P to build systems that can solve pre-university level Euclidean geometry and Newtonian physics examinations. P2P performs at least as well as the average student on questions from textbooks, geometry questions from previous SAT exams, and mechanics questions from Advanced Placement (AP) exams. I will conclude by describing implications of this research and some ideas for future work.

Speaker's bio:

Mrinmaya Sachan is a Ph.D. candidate in the Machine Learning Department in the School of Computer Science at Carnegie Mellon University. His research is in the interface of machine learning, natural language processing, knowledge discovery and reasoning. He received an outstanding paper award at ACL 2015, multiple fellowships (IBM fellowship, Siebel scholarship and CMU CMLH fellowship) and was a finalist for the Facebook fellowship. Before graduate school, he graduated with a B.Tech. in Computer Science and Engineering from IIT Kanpur with an Academic Excellence Award.



A Client-centric Approach to Transactional Datastores
Natacha Crooks | University of Texas at Austin

2019-02-28, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Modern applications must collect and store massive amounts of data. Cloud storage offers these applications simplicity: the abstraction of a failure-free, perfectly scalable black-box. While appealing, offloading data to the cloud is not without challenges. Cloud storage systems often favour weaker levels of isolation and consistency. These weaker guarantees introduce behaviours that, without care, can break application logic. Offloading data to an untrusted third party like the cloud also raises questions of security and privacy.

This talk summarises my efforts to improve the performance, the semantics and the security of transactional cloud storage systems. It centers around a simple idea: defining consistency guarantees from the perspective of the applications that observe these guarantees, rather than from the perspective of the systems that implement them. I will discuss how this new perspective brings forth several benefits. First, it offers simpler and cleaner definitions of weak isolation and consistency guarantees. Second, it enables more scalable implementations of existing guarantees like causal consistency. Finally, I will discuss its applications to security: our client-centric perspective allows us to add obliviousness guarantees to transactional cloud storage systems.

Speaker's bio:

Natacha Crooks is a PhD candidate at the University of Texas at Austin and a visiting student at Cornell University. Her research interests are in distributed systems, distributed computing and databases. She is the recipient of a Google Doctoral Fellowship in Distributed Computing and a Microsoft Research Women Fellowship.



New Abstractions for High-Performance Datacenter Applications
Malte Schwarzkopf | MIT CSAIL

2019-02-18, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Developing high-performance datacenter applications is complex and time-consuming today, as developers must understand and correctly implement subtle interactions between different backend systems. I describe a new approach that redesigns core datacenter systems around new abstractions: the right abstractions substantially reduce complexity while maintaining the same performance. This saves expensive developer time, uses datacenter servers more efficiently, and can enable new, previously impossible systems and applications. I illustrate the impact of such redesigns with Noria, which recasts web application backends (i.e., databases and caches) as a streaming dataflow computation based on a new abstraction of partial state. Noria's partially-stateful dataflow brings classic databases' familiar query flexibility to scalable dataflow systems, simplifying applications and improving the backend's efficiency. For example, Noria increases the request load handled by a single server by 5-70x compared to state-of-the-art backends. Additional new abstractions from my research increase the efficiency of other datacenter systems (e.g., cluster schedulers), or enable new kinds of systems that, for example, may help protect user data against exposure through application bugs.

Speaker's bio:

Malte Schwarzkopf is a postdoc at MIT CSAIL, where he is a member of the Parallel and Distributed Operating Systems (PDOS) group. In his research, Malte designs and builds systems that aim to be both efficient and easy to use, and some of these systems have already impacted industry practice. Malte received both his B.A. and Ph.D. from the University of Cambridge, and his research has won an NSDI Best Paper Award and a EuroSys Best Student Paper Award.



Dynamic Symbolic Execution for Software Analysis
Cristian Cadar | Imperial College London

2019-02-07, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Symbolic execution is a program analysis technique that can automatically explore and analyse paths through a program. While symbolic execution was initially introduced in the seventies, it has only received significant attention during the last decade, due to tremendous advances in constraint solving technology and effective blending of symbolic and concrete execution into what is often called dynamic symbolic execution. Dynamic symbolic execution is now a key ingredient in many computer science areas, such as software engineering, computer security, and software systems, to name just a few. In this talk, I will discuss recent advances and ongoing challenges in the area of dynamic symbolic execution, drawing upon our experience developing several symbolic execution tools for many different scenarios, such as high-coverage test input generation, bug and security vulnerability detection, patch testing and bounded verification, among many others.
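To fix intuitions, here is a minimal path-exploration sketch (my illustration, not the speaker's tools): a toy program under test reports the outcome of each branch it takes, and a brute-force search over a small input domain stands in for the constraint solver that a dynamic symbolic executor would use to find inputs covering each feasible path. The program and names are hypothetical.

```python
from itertools import product

def explore(program, names, domain):
    """Enumerate feasible paths of `program` by trying concrete inputs
    (a brute-force stand-in for a constraint solver) and recording, for
    each distinct branch trace, one input that triggers it."""
    paths = {}
    for values in product(domain, repeat=len(names)):
        env = dict(zip(names, values))
        trace = tuple(program(env))        # the sequence of branch outcomes
        paths.setdefault(trace, env)       # keep a witness input per path
    return paths

def sample(env):
    # Toy program under test; yields one boolean per branch decision taken.
    positive = env['x'] > 0
    yield positive
    if positive:
        yield env['x'] % 2 == 0
    else:
        yield env['y'] > env['x']
```

Running `explore(sample, ['x', 'y'], range(-2, 3))` discovers four feasible paths, each with a concrete witness input; a real symbolic executor derives such inputs by solving the path condition instead of enumerating.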

Speaker's bio:

Cristian Cadar is a Reader in the Department of Computing at Imperial College London, where he leads the Software Reliability Group (http://srg.doc.ic.ac.uk), working on automatic techniques for increasing the reliability and security of software systems. Cristian received an ERC Consolidator Grant in 2018, the HVC Award in 2017, the ACM CCS Test of Time Award in 2016, a British Computer Society Fellowship in 2016, the Jochen Liedtke Young Researcher Award in 2015 and an EPSRC Early-Career Fellowship in 2013. Many of the research techniques he co-authored have been open-sourced and used by several groups in both academia and industry. In particular, he is co-author and the principal maintainer of the KLEE symbolic execution system, a popular system with a large user base. Cristian has a PhD in Computer Science from Stanford University, and undergraduate and Master's degrees from the Massachusetts Institute of Technology.



How to Win a First-Order Safety Game
Helmut Seidl | TUM

2019-02-01, 10:30 - 11:30
Kaiserslautern building G26, room 111

Abstract:

First-order (FO) transition systems have recently attracted attention for the verification of parametric systems such as network protocols, software-defined networks or multi-agent workflows. Desirable properties of these systems such as functional correctness or non-interference have conveniently been formulated as safety properties. Here, we go one step further. Our goal is not only to verify safety, but also to develop techniques for automatically synthesizing strategies to enforce safety. For that reason, we generalize FO transition systems to FO safety games. We prove that the existence of a winning strategy for the safety player in finite games is equivalent to second-order quantifier elimination. For monadic games, we provide a complete classification into decidable and undecidable cases. For games with non-monadic predicates, we concentrate on universal invariants only. We identify a non-trivial sub-class where safety is decidable. For the general case, we provide meaningful abstraction and refinement techniques for realizing a CEGAR-based synthesis loop. Joint work with Christian Müller (TUM) and Bernd Finkbeiner (Universität des Saarlandes).

Speaker's bio:

Helmut Seidl graduated in Mathematics (1983) and received his Ph.D. degree in Computer Science (1986) from the Johann Wolfgang Goethe Universität, Frankfurt. He received his Dr. Habil. (1994) from the Universität des Saarlandes, Saarbrücken. In 1994, he was appointed Full Professor at the University of Trier. Since 2003, he has held the chair for "Languages and Specification Formalisms" at TU München. From 2009 to 2018, he was speaker of the research training group PUMA ("Programm- Und Modell-Analyse") and now is speaker of the research training group ConVeY ("Continuous Verification of CYber-Physical Systems"). His research interests include static analysis by abstract interpretation and model checking, efficient fixpoint algorithms, and expressive domains for numerical invariants. He has worked on dedicated analyses of concurrent and parametric systems, safety-critical C programs, and cryptographic protocols. He is also interested in tree automata and corresponding classes of Horn clauses, in tree transducers and XML processing.



Automated Complexity Analysis of Rewrite Systems
Florian Frohn | RWTH Aachen

2019-01-22, 10:00 - 11:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Many approaches to analyze the complexity of programs automatically are based on transformations into integer or term rewrite systems. However, state-of-the-art tools that analyze the worst-case complexity of rewrite systems are restricted to the inference of upper bounds. In this talk, the first techniques for the inference of lower bounds on the worst-case complexity of integer and term rewrite systems are introduced. While upper bounds can prove the absence of performance-critical bugs, lower bounds can be used to find them.
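As a toy illustration of the complexity measure involved (not of the inference techniques from the talk), the worst-case complexity of a term rewrite system is typically measured as the length of its longest derivation to a normal form. The sketch below, with terms encoded as nested tuples (an encoding chosen here for illustration), counts rewrite steps for Peano addition, whose derivation length is linear in the first argument:

```python
# Toy term rewriting sketch: terms are "0" or nested tuples like ("s", t)
# and ("add", t1, t2). Rules: add(0, y) -> y and add(s(x), y) -> s(add(x, y)).

def normalize(term):
    """Innermost rewriting to normal form; returns (normal_form, step_count)."""
    steps = 0
    if isinstance(term, tuple):
        head, args = term[0], []
        for arg in term[1:]:                 # normalize arguments first
            nf, s = normalize(arg)
            args.append(nf)
            steps += s
        term = (head, *args)
        if head == "add":
            x, y = term[1], term[2]
            if x == "0":                     # rule: add(0, y) -> y
                return y, steps + 1
            if isinstance(x, tuple) and x[0] == "s":
                # rule: add(s(x), y) -> s(add(x, y))
                nf, s = normalize(("s", ("add", x[1], y)))
                return nf, steps + 1 + s
    return term, steps

def num(n):                                  # build the Peano numeral s^n(0)
    return "0" if n == 0 else ("s", num(n - 1))

def to_int(term):                            # read a numeral back as an int
    count = 0
    while term != "0":
        count, term = count + 1, term[1]
    return count
```

Normalizing add(s^n(0), m) takes exactly n + 1 steps, so both a linear lower bound and a linear upper bound hold for this system.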

For term rewriting, the power of the presented technique gives rise to the question of whether the existence of a non-constant lower bound is decidable. Thus, the corresponding decidability results are also discussed in this talk.

Finally, to see the practical value of complexity analysis techniques for rewrite systems, we will take a brief look at the transformation from Java programs to integer rewrite systems that is implemented in the tool AProVE.

Speaker's bio:

Florian Frohn is a research assistant at Lehr- und Forschungsgebiet Informatik 2 at RWTH Aachen. In December 2018, he successfully defended his PhD thesis.



Language dynamics in social media
Animesh Mukherjee | Indian Institute of Technology, Kharagpur

2018-12-13, 10:30 - 11:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 105

Abstract:

In this talk I shall outline a summary of our five-year-long initiative studying the temporal dynamics of various human language-like entities on social media. Some of the topics that I plan to cover are (a) how opinion conflicts can be effectively used for incivility detection on Twitter [CSCW 2018], (b) how word borrowings can be automatically identified from social signals [EMNLP 2017], and (c) how hashtags on Twitter form compounds like natural language words (e.g., #Wikipedia+#Blackout=#WikipediaBlackout) that become far more popular than the individual constituent hashtags [CSCW 2016, Honorable Mention].

Speaker's bio:

Animesh Mukherjee is an Associate Professor in the Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur. He is also a Simons Associate, ICTP, Italy, and an ACM Distinguished Speaker. His main research interests are in applying complex-system approaches (mainly complex networks and agent-based simulations) to different problems in Computer Science, including (a) human language evolution and change, (b) web social media, (c) information retrieval, and (d) natural language processing. He regularly publishes in top conferences such as ACM SIGKDD, ACM CIKM, ACM CSCW, ICWSM, ACL, EMNLP, COLING, and ACM/IEEE JCDL, and in journals such as PNAS, Scientific Reports, ACM TKDD, Communications of the ACM, IEEE TKDE, IEEE JSAC, Physical Review, and Europhysics Letters. He regularly serves on the programme committees of top conferences such as IJCAI, EMNLP, and COLING. He has received many notable awards, including the INAE Young Engineer Award 2012, the INSA Medal for Young Scientists 2014, an IBM Faculty Award 2015, and the Humboldt Fellowship for Experienced Researchers in 2017.



Survey Equivalence: An Information-theoretic Measure of Classifier Accuracy When the Ground Truth is Subjective
Paul Resnick | University of Michigan, School of Information

2018-11-27, 10:30 - 12:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Many classification tasks have no objective ground truth. Examples include: Which content or explanation is "better" according to some community? Is this comment toxic? What is the political leaning of this news article? The traditional modeling approach assumes each item has an objective true state that is perceived by humans with some random error; it fails to account for the fact that people agree more on some items than on others. I will describe an alternative model in which the true state is a distribution over the labels that raters from a specified population would assign to an item. This leads to information gain (mutual information) as a theoretically justified and computationally tractable measure of a classifier's quality, and to an intuitive interpretation of information gain in terms of the sample size of a survey that would yield the same expected error rate.
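The information-gain measure referred to in the abstract is ordinary mutual information between the classifier's output and a rater's label; the survey-equivalence calibration itself is not reproduced here. A minimal sketch (the joint-distribution input format is an assumption made for illustration):

```python
from math import log2

def mutual_information(joint):
    """Mutual information I(C; Y) in bits, given a joint distribution
    as a dict {(classifier_output, rater_label): probability}."""
    p_c, p_y = {}, {}
    for (c, y), p in joint.items():          # compute the two marginals
        p_c[c] = p_c.get(c, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(p * log2(p / (p_c[c] * p_y[y]))
               for (c, y), p in joint.items() if p > 0)
```

On a balanced binary task, a classifier that perfectly matches the raters gains one full bit, while one independent of the labels gains zero.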

Speaker's bio:

Paul Resnick is the Michael D. Cohen Collegiate Professor of Information and Associate Dean for Research at the University of Michigan School of Information. He was a pioneer in the fields of recommender systems and reputation systems. He recently started the Center for Social Media Responsibility, which encourages and helps social media platforms to meet their public responsibilities.



The Reachability Problem for Vector Addition Systems is Not Elementary
Wojciech Czerwiński | University of Warsaw

2018-11-22, 16:00 - 17:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

I will present a recent non-elementary lower bound on the complexity of the reachability problem for Vector Addition Systems. I plan to show the main insights of the proof. In particular, I will present a surprising equation on fractions, which is the core of the new source of hardness found in VASes.

Speaker's bio:

Wojciech Czerwiński is an assistant professor (pol. adiunkt) at the Institute of Informatics, Faculty of Mathematics, Informatics and Mechanics of the University of Warsaw. His interests include automata and logic, more concretely infinite state systems and separability problems.



More Realistic Scheduling Models and Analyses for Advanced Real-Time Embedded Systems
Georg von der Brueggen | TU Dortmund

2018-11-22, 14:30 - 15:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In real-time embedded systems, compliance with timing constraints has to be guaranteed for each task in addition to functional correctness. The first part of the talk considers the theoretical comparison of scheduling algorithms and schedulability tests by evaluating speedup factors for non-preemptive scheduling, which leads to a discussion of general problems of resource augmentation bounds. In addition, it is explained how utilization bounds can be parameterized, resulting in better bounds for specific scenarios, e.g., when analyzing non-preemptive Rate-Monotonic scheduling as well as task sets inspired by automotive applications.
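For context, the best-known utilization bound is the classic Liu and Layland bound for preemptive Rate-Monotonic scheduling; the parameterized and non-preemptive bounds discussed in the talk refine this kind of test. A quick sketch of the classic test:

```python
def rm_utilization_bound(n):
    """Liu & Layland: n periodic tasks are schedulable under preemptive
    Rate-Monotonic scheduling if total utilization U <= n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def passes_ll_test(tasks):
    """tasks: list of (wcet, period) pairs. Sufficient, not necessary."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))
```

The bound is 1.0 for a single task, about 0.828 for two tasks, and tends to ln 2 (about 0.693) as n grows.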

In the second part, a setting similar to mixed-criticality systems is considered, and the criticism of previous work in this area is detailed. A new system model that is better applicable to realistic scenarios, namely Systems with Dynamic Real-Time Guarantees, is then explained. This model is extended to a multiprocessor scenario, considering CPU overheating as a possible cause of mixed-criticality behaviour. Finally, a way to determine the deadline-miss probability for such systems is described that drastically reduces the runtime of such calculations.

The third part discusses tasks with self-suspension behaviour, explains a fixed-relative-deadline strategy for segmented self-suspension tasks with one suspension interval, and details how this approach can be exploited in a resource-oriented partitioned scheduling. Furthermore, it is explained how the gap between the dynamic and the segmented self-suspension model can be bridged by hybrid models.

Speaker's bio:

-



Verified Secure Routing
Peter Müller | ETH Zurich

2018-11-19, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

SCION is a new Internet architecture that addresses many of the security vulnerabilities of today’s Internet. Its clean-slate design provides, among other properties, route control, failure isolation, and multi-path communication. The verifiedSCION project is an effort to formally verify the correctness and security of SCION. It aims to provide strong guarantees for the entire architecture, from the protocol design to its concrete implementation. The project uses stepwise refinement to prove that the protocol withstands increasingly strong attackers. The refinement proofs assume that all network components such as routers satisfy their specifications. This property is then verified separately using deductive program verification in separation logic. This talk will give an overview of the verifiedSCION project and explain, in particular, how we verify code-level properties such as memory safety, I/O behavior, and information flow security.

Speaker's bio:

Peter Müller has been Full Professor and head of the Chair of Programming Methodology at ETH Zurich since August 2008. His research focuses on languages, techniques, and tools for the development of correct software. His previous appointments include a position as Researcher at Microsoft Research in Redmond, an Assistant Professorship at ETH Zurich, and a position as Project Manager at Deutsche Bank in Frankfurt. Peter Müller received his PhD from the University of Hagen.



Feedback Control for Predictable Cloud Computing
Dr. Martina Maggio | Lund University

2018-11-14, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Cloud computing gives the illusion of infinite computational capacity and allows for on-demand resource provisioning. As a result, over the last few years the cloud computing model has seen widespread industrial adoption, and companies like Netflix have offloaded their entire infrastructure to the cloud. However, with even the largest datacenter being of finite size, cloud infrastructures have experienced overload due to overbooking or transient failures. This is an excellent opportunity for the design of control solutions that tackle the problem of mitigating overload peaks using feedback from the computing infrastructure. Exploiting control-theoretical principles and taking advantage of the knowledge and analysis capabilities of control tools, it is possible to provide formal guarantees on the predictability of the cloud platform. This talk introduces recent research advances on feedback control in the cloud computing domain and discusses control solutions and future research for both cloud application development and infrastructure management. In particular, it covers application brownout, control-based load balancing, and autoscaling.
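To give a flavor of control-based autoscaling (an illustrative sketch, not the controllers from the talk; the function name, gains, and workload numbers are invented for this example), a simple proportional controller can drive the replica count of a service toward a utilization setpoint:

```python
def scale_step(replicas, load, capacity_per_replica, setpoint, gain=1.0):
    """One control period: measure utilization, correct the replica count.
    A positive error means the service is overloaded relative to the setpoint."""
    utilization = load / (replicas * capacity_per_replica)
    error = utilization - setpoint
    adjustment = gain * error / setpoint     # relative correction
    return max(1, round(replicas * (1 + adjustment)))

# Closed-loop simulation: constant load of 100 req/s, 10 req/s per replica,
# target utilization 0.7. The loop settles near load / (capacity * setpoint).
replicas = 2
for _ in range(10):
    replicas = scale_step(replicas, 100, 10, 0.7)
```

Feedback makes the sizing self-correcting: the same loop re-converges when the load changes, which is the predictability argument made in the abstract.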

Speaker's bio:

Martina Maggio is an Associate Professor at the Department of Automatic Control, Lund University. Her research area is the application of control-theoretical techniques to computing systems' problems. She completed her Ph.D. at Politecnico di Milano on the application of control-theoretical tools to the design of computing systems components. During her Ph.D., she spent one year as a visiting graduate student at the Computer Science and Artificial Intelligence Laboratory at MIT, working with Anant Agarwal and Hank Hoffmann on the Self-Aware Computing project, named one of ten "World Changing Ideas" by Scientific American in 2011. She joined Lund University in 2012 as a postdoctoral researcher, working with Karl-Erik Årzén on resource allocation for cloud infrastructures and real-time systems. Martina became an Assistant Professor in 2014, and then Docent and Associate Professor in 2017.



Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning
Elissa Redmiles | University of Maryland

2018-11-13, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

A variety of experts -- computer scientists, policy makers, judges -- constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other and, perhaps more importantly, with the people for whom they are making these decisions: the users.

This raises a question: Is it possible to learn best practices directly from the users? The field of moral philosophy suggests yes, through the process of descriptive decision-making, in which we observe people’s preferences and then infer best practice rather than using experts’ normative (prescriptive) determinations to define best practice. In this talk, I will explore the benefits and challenges of applying such a descriptive approach to making computationally relevant decisions regarding: (i) selecting security prompts for an online system; (ii) determining which features to include in a classifier for jail sentencing; (iii) defining standards for ethical virtual reality content.

Speaker's bio:

Elissa Redmiles is a Ph.D. Candidate in Computer Science at the University of Maryland and has been a visiting researcher with the Max Planck Institute for Software Systems and the University of Zurich. Elissa’s research interests are broadly in the areas of security and privacy. She uses computational, economic, and social science methods to understand users’ security and privacy decision-making processes, specifically investigating inequalities that arise in these processes and mitigating those inequalities through the design of systems that facilitate safety equitably across users. Elissa is the recipient of a NSF Graduate Research Fellowship, a National Science Defense and Engineering Graduate Fellowship, and a Facebook Fellowship. Her work has appeared in popular press publications such as Scientific American, Business Insider, Newsweek, and CNET and has been recognized with the John Karat Usable Privacy and Security Student Research Award, a Distinguished Paper Award at USENIX Security 2018, and a University of Maryland Outstanding Graduate Student Award.



Fairness for Sequential Decision Making Algorithms
Hoda Heidari | ETH Zurich

2018-11-12, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Fairness considerations in settings where decisions are made by supervised learning algorithms (e.g., criminal risk assessment) have recently received considerable attention. As the fairness literature continues to expand, mostly around this canonical learning task, it is important to recognize that many real-world applications of ML fall outside the category of supervised, one-shot learning. In this presentation, I will talk about two scenarios in which algorithmic decisions are made sequentially and over time. I will argue that in such settings, being fair requires, at a minimum, that decisions be "consistent" across individuals who arrive at different time steps; that is, similar individuals must be treated similarly. I will then talk about how such consistency constraints affect learning.

In the first part of the talk, I will introduce a generic sequential decision making framework, in which at each time step the learning algorithm receives data corresponding to a new individual (e.g. a new job application) and must make an irrevocable decision about him/her (e.g. whether to hire the applicant) based on observations it has made so far. I propose a general framework for post-processing predictions made by a black-box learning model, so that the resulting sequence of outcomes is guaranteed to be consistent. I show, both theoretically and via simulations, that imposing consistency constraints will not significantly slow down learning.

In the second part of the talk, I will focus on fairness considerations in a particular type of market, namely combinatorial prediction markets, where traders can submit limit orders on various security bundles and the market maker is tasked with executing these orders in a fair manner. The main challenge in running such a market is that executing one order can potentially change the price of every other order in the book. I define the notion of a "fair trading path", which at a high level guarantees that similar orders are executed similarly: no order executes at a price above its limit, and every order executes when its market price falls below its limit price. I present a market algorithm that respects these fairness conditions and evaluate it using real combinatorial predictions made during the 2008 U.S. Presidential election.

I will conclude by comparing my work with previous papers on fairness for online learning and by listing directions for future work.

Speaker's bio:

Hoda Heidari is a Postdoctoral Fellow at the Machine Learning Institute at ETH Zurich, working under the supervision of Prof. Andreas Krause. She received her PhD in Computer and Information Science from the University of Pennsylvania, where she was advised by Prof. Michael Kearns and Prof. Ali Jadbabaie. Her research interests are broadly in algorithmic economics and the societal aspects of AI. In particular, she is interested in fairness considerations in online markets and data-driven decision-making systems.



The Optimality Program in Parameterized Algorithms
Daniel Marx | Hungarian Academy of Sciences

2018-11-12, 10:30 - 11:30
Saarbrücken building E1 4, room 024

Abstract:

Parameterized complexity analyzes the computational complexity of NP-hard combinatorial problems in finer detail than classical complexity: instead of expressing the running time as a univariate function of the size $n$ of the input, one or more relevant parameters are defined and the running time is analyzed as a function depending on both the input size and these parameters. The goal is to obtain algorithms whose running time depends polynomially on the input size, but may have arbitrary (possibly exponential) dependence on the parameters. Moreover, we would like the dependence on the parameters to be as slowly growing as possible, to make it more likely that the algorithm is efficient in practice for small values of the parameters. In recent years, advances in parameterized algorithms and complexity have given us a tight understanding of how the parameter has to influence the running time for various problems. The talk will survey results of this form, showing that seemingly similar NP-hard problems can behave in very different ways if they are analyzed in the parameterized setting.
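A canonical example of this viewpoint (an illustrative sketch, not taken from the talk) is Vertex Cover: given a graph and a parameter k, decide whether some set of at most k vertices touches every edge. A bounded search tree solves it in O(2^k · m) time, polynomial in the input size for every fixed value of the parameter:

```python
def has_vertex_cover(edges, k):
    """Branching FPT algorithm for Vertex Cover, parameterized by cover size k.
    Any edge must have at least one endpoint in the cover, so branch on both."""
    if not edges:
        return True                      # nothing left to cover
    if k == 0:
        return False                     # edges remain but budget is exhausted
    u, v = edges[0]                      # pick any still-uncovered edge (u, v)
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))
```

The recursion tree has depth at most k and branching factor 2, hence at most 2^k leaves, each with linear work: arbitrary (exponential) dependence on k, but polynomial dependence on the graph.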

Speaker's bio:

http://www.cs.bme.hu/~dmarx/



Compiling Dynamical Systems for Efficient Simulation on Reconfigurable Analog Computers
Sara Achour | MIT

2018-10-22, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Programmable analog devices are a powerful new computing substrate that is especially appropriate for performing computationally intensive simulations of dynamical systems. Until recently, the de-facto standard for programming these devices required hardware specialists to manually program the analog device to model the dynamical system of interest. In this talk, I will present Arco, a compiler that automatically configures analog devices to perform dynamical system simulation, and Jaunt, a compilation technique that scales dynamical system parameters to change the speed of the simulation and render the resulting simulation physically realizable given the operating constraints of the analog hardware platform. These techniques capture the domain knowledge required to fully exploit the capabilities of reconfigurable analog devices, eliminating a key obstacle to their widespread adoption.

Speaker's bio:

Sara Achour is a PhD candidate at the Computer Science and Artificial Intelligence Laboratory at Massachusetts Institute of Technology (CSAIL MIT) and a NSF Fellowship recipient. Her current research focuses on compilation techniques for reconfigurable analog devices. Her broader research interests focus on developing automated techniques for nontraditional computational platforms and devices.



Justified representation in multiwinner voting: axioms and algorithms
Edith Elkind | University of Oxford

2018-10-19, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Suppose that a group of voters wants to select k ≥ 1 alternatives from a given set, and each voter indicates which of the alternatives are acceptable to her: the alternatives could be conference submissions, applicants for a scholarship, or locations for a fast-food chain. In this setting it is natural to require that the winning set represent the voters fairly, in the sense that large groups of voters with similar preferences have at least some of their approved alternatives in the winning set. We describe several ways to formalize this idea and show how to use it to classify voting rules. For one of our axioms, the only voting rule known to satisfy it is not polynomial-time computable, and it was conjectured that no voting rule satisfying this axiom can be computed in polynomial time; however, we will show that this is not the case.
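One formalization of this idea from the multiwinner voting literature is justified representation (JR): no group of at least n/k voters who all approve a common candidate may be left with no approved candidate in the winning set. Whether a given committee provides JR can be checked in polynomial time; a sketch (the data encoding chosen here is an assumption for illustration):

```python
def satisfies_jr(approvals, committee, candidates):
    """approvals: one set of approved candidates per voter; committee: the k
    winners. The committee fails JR iff some candidate c is approved by at
    least n/k voters, none of whom approves any committee member."""
    n, k = len(approvals), len(committee)
    winners = set(committee)
    for c in candidates:
        unrepresented_fans = sum(1 for ballot in approvals
                                 if c in ballot and not (ballot & winners))
        if unrepresented_fans >= n / k:
            return False
    return True
```

In the example below, four voters elect a committee of two, so any two voters who agree on a candidate deserve representation; {b, c} leaves both a-supporters without any approved winner and therefore fails JR.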

Speaker's bio:

Edith Elkind researches game theory and the computation of social choices. She looks at the decisions involved in multi-agent systems such as auctions, elections and co-operative games.



AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
Dana Drachsler Cohen | ETH Zurich

2018-10-15, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In this talk, I will present AI2, a sound and scalable analyzer for deep neural networks. Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks). The key insight behind AI2 is to phrase reasoning about the safety and robustness of neural networks in terms of classic abstract interpretation, making it possible to leverage decades of advances in that area. To this end, I will introduce abstract transformers that capture the behavior of fully connected and convolutional layers with rectified linear unit (ReLU) activations, as well as max-pooling layers. This makes it possible to handle real-world neural networks, which are often built out of these types of layers. I will also empirically demonstrate that (i) AI2 is precise enough to prove useful specifications (e.g., robustness), (ii) AI2 can be used to certify the effectiveness of state-of-the-art defenses for neural networks, and (iii) AI2 is significantly faster than existing analyzers based on symbolic analysis, which often take hours to verify even simple fully connected networks.
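To illustrate what an abstract transformer is in the simplest possible domain (AI2 itself uses more precise domains such as zonotopes; this box/interval sketch is only illustrative), one can propagate an input box through a neuron and its ReLU activation:

```python
def affine_box(weights, bias, box):
    """Interval (box) transformer for one neuron computing w . x + b over an
    input box [(lo_1, hi_1), ...]: for each coordinate, pick the interval end
    that minimizes (resp. maximizes) the contribution w_i * x_i."""
    lo = bias + sum(min(w * l, w * h) for w, (l, h) in zip(weights, box))
    hi = bias + sum(max(w * l, w * h) for w, (l, h) in zip(weights, box))
    return (lo, hi)

def relu_box(interval):
    """Abstract transformer for ReLU on an interval: clamp below at zero."""
    lo, hi = interval
    return (max(0.0, lo), max(0.0, hi))
```

Propagating the box [0,1] x [0,1] through a neuron with weights (1, -1) and bias 0 yields the interval [-1, 1] before the activation and [0, 1] after it; if the output interval computed this way lies inside the safe region, the property is proved for every concrete input in the box, which is the soundness argument behind the approach.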

Speaker's bio:

Dana Drachsler Cohen is a postdoc in the Secure, Reliable, and Intelligent Systems Lab at the Computer Science Department at ETH Zurich. Her research interests span program synthesis, machine learning, security, and computer networks.



Improving the energy efficiency of virtualized datacenters
Vlad Nitu | Toulouse University

2018-09-21, 10:30 - 11:30
Kaiserslautern building G26, room 111

Abstract:

Energy consumption is an important concern for cloud datacenters. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 US datacenters alone will spend about $13 billion on energy bills. Servers are generally manufactured so that they achieve high energy efficiency at high utilizations; for a low cost per computation, datacenter servers should therefore run at utilizations as high as possible. To fight historically low utilization, cloud computing adopted server virtualization. This technology enables a cloud provider to pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with a set of long-term unused resources (called 'holes'). Our first contribution is a cloud management system that dynamically splits/fuses VMs so that they can better fill the holes. However, datacenter resource fragmentation has a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters the two resources are strongly coupled, since both are bound to a physical server. Our second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server. The underutilization observed on physical servers also holds for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources, because cloud customers are not able to correctly estimate the resource amounts their applications need. Our third contribution is a system that estimates the memory consumption (i.e., the working set size) of a VM with low overhead and high accuracy. Thereby, we can consolidate VMs based on their working set size rather than their booked memory.
However, the drawback of this approach is the risk of memory starvation: if one or multiple VMs see a sharp increase in memory demand, the physical server may run out of memory. This is undesirable because the cloud platform is then unable to provide clients with the memory they paid for. Our fourth contribution is a system that allows a VM to use remote memory provided by a different rack server. Thereby, in the case of a peak memory demand, our system allows the VM to allocate memory on a remote physical server.

Speaker's bio:

Vlad Nitu is a PhD student at Toulouse University, interested in operating and virtualization systems with a focus on optimizing their energy consumption. Starting in October, he will be a postdoc at EPFL in the group of Prof. Rachid Guerraoui.



Gradually Typed Symbolic Expressions: an Approach for Developing Embedded Domain-Specific Modeling Languages
David Broman | KTH Royal Institute of Technology

2018-09-13, 14:30 - 15:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Embedding a domain-specific language (DSL) in a general-purpose host language is an efficient way to develop a new DSL. Various kinds of languages and paradigms can be used as host languages, including object-oriented, functional, statically typed, and dynamically typed variants, all with their pros and cons. For deep embedding, statically typed languages enable early checking and potentially good DSL error messages, instead of reporting runtime errors. Dynamically typed languages, on the other hand, enable flexible transformations, thus avoiding extensive boilerplate code. In this talk, I will discuss the concept of gradually typed symbolic expressions, which mix static and dynamic typing for symbolic data. The key idea is to combine the strengths of dynamic and static typing in the context of deep embedding of DSLs. Moreover, I will briefly outline an ongoing research effort to develop a new framework for heterogeneous domain-specific languages.

Speaker's bio:

David Broman is an Associate Professor at the KTH Royal Institute of Technology in Sweden, where he is leading the Model-based Computing Systems (MCS) research group. Between 2012 and 2014, he was a visiting scholar at the University of California, Berkeley, where he also was employed as a part time researcher until 2016. David received his Ph.D. in Computer Science in 2010 from Linköping University, Sweden, and was appointed Assistant Professor there in 2011. He earned a Docent degree in Computer Science in 2015. His research focuses on model-based design of time-aware systems, including cyber-physical systems, embedded systems, and real-time systems. In particular, he is interested in programming and modeling language theory, formal semantics, compilers, and machine learning. David has received an outstanding paper award at RTAS (co-authored 2018), the award as teacher of the year, selected by the student union at KTH (2017), the best paper award at IoTDI (co-authored 2017), awarded the Swedish Foundation for Strategic Research's individual grant for future research leaders (2016), and the best paper presentation award at CSSE&T (2010). He has worked several years within the software industry, co-founded three companies, co-founded the EOOLT workshop series, and is a member of IFIP WG 2.4, Modelica Association, and a senior member of IEEE.



Timed C: An Extension to the C Programming Language for Real-Time Systems
Saranya Natarajan | KTH Royal Institute of Technology

2018-09-12, 15:30 - 16:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 422

Abstract:

The design and implementation of real-time systems require that both the logical and the temporal behaviour be correct. There exist several specialized languages and tools that use the notion of logical time, as well as industrial-strength languages such as Ada and RTSJ that incorporate direct handling of real time. Although these languages and tools have been shown to be good alternatives for safety-critical systems, most commodity real-time and embedded systems are today implemented in the standard C programming language. Such systems typically target proprietary bare-metal platforms, standard POSIX-compliant platforms, or open-source operating systems. It is, however, error-prone to develop large, reliable, and portable systems based on these APIs. In this talk, I will present an extension to the C programming language, called Timed C, with a minimal set of language primitives, and show how a retargetable source-to-source compiler can be used to compile and execute simple, expressive, and portable programs.

Speaker's bio:

Saranya Natarajan is a third-year PhD student at the KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science (EECS). She is pursuing her doctoral research under the guidance of David Broman. She received her master's degree from the Indian Institute of Science, Bangalore, in 2015. Her research interests span the areas of real-time systems, programming languages, and compilers.



HAMS: Harnessing AutoMobiles for Safety
Venkat Padmanabhan | Microsoft Research India

2018-08-24, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Road safety is a major public health issue, with road accidents accounting for an estimated 1.25 million fatalities, and many more injuries, the world over, each year. The problem is particularly acute in India, with nearly a quarter of a million fatalities every year, i.e., 20% of the world’s total. Besides the heavy human cost, road accidents also impose a significant economic cost. The crux of the problem is that the major factors impacting safety — vehicles, roads, and people — have virtually no ongoing monitoring today. In the HAMS project at Microsoft Research India, we employ a dashboard-mounted smartphone, and the array of sensors it includes, as a virtual harness, with a view to monitoring drivers and their driving. We focus primarily on the camera sensors of the smartphone, both the front camera, which faces the driver, and the back camera, which looks out to the front of the vehicle. We address the challenges arising from our choice of low-cost generic sensors instead of more expensive specialized sensors, the need for efficient processing on a smartphone, and demands of robust operation in uncontrolled environments. HAMS has been piloted as part of a driver training program, with promising early results.

Speaker's bio:

Venkat Padmanabhan is a Principal Researcher at Microsoft Research India, where he founded the Mobility, Networks, and Systems group in 2007. He was previously with Microsoft Research Redmond, USA for nearly 9 years. Venkat’s research interests are broadly in networked and mobile systems, and his work over the years has led to highly-cited papers and paper awards, technology transfers within Microsoft, and also industry impact. He received the Shanti Swarup Bhatnagar Prize and the inaugural ACM SIGMOBILE Test-of-Time paper award, both in 2016. Venkat holds a B.Tech. from IIT Delhi and an M.S. and a Ph.D. from UC Berkeley, all in Computer Science, and has been elected a Fellow of the INAE, the IEEE, and the ACM. He can be reached online at http://research.microsoft.com/~padmanab/.



Not Your Typical Objects: Made from Raw Materials Augmented with Sensing and Computation
Phillip Stanley-Marbell | University of Cambridge

2018-08-21, 14:00 - 15:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Materials and manufacturing processes can be made to adapt to the way in which their end-products are used, by imbuing raw materials with sensing and computation elements. By making the algorithms that run on these computing elements aware of the physics of the objects in which they are embedded, computation-and-sensing augmented materials could change the way we think about the border between inanimate objects and computing systems. One way to exploit knowledge about the physical world in the programs running on (or in) these objects is to exploit the fact that these programs can often tolerate noise and other deviations from correctness in their input data. This talk will highlight research that builds on these observations.

Speaker's bio:

Phillip Stanley-Marbell is an Assistant Professor in the Department of Engineering at the University of Cambridge, where he leads the Physical Computation Lab (http://physcomp.eng.cam.ac.uk). His research focus is on exploiting an understanding of properties of the physical world and the physiology of human perception to make computing systems more efficient. Prior to joining the University of Cambridge, he was a researcher at MIT, from 2014 to 2017. He received his Ph.D. from CMU in 2007, was a postdoc at TU Eindhoven until 2008, and then a permanent Research Staff Member at IBM Research—Zürich (2008-2012). In 2012 he joined Apple, where he led the development of a new system component now used across all iOS, watchOS, and macOS platforms. Prior to completing his Ph.D., he held positions at Bell-Labs Research (1995, 1996), Lucent Technologies and Philips (1999), and NEC Research Labs (2005).



Planar Graph Perfect Matching is in NC
Vijay V. Vazirani | University of California, Irvine

2018-08-20, 11:00 - 12:00
Saarbrücken building E1 4, room 024

Abstract:

Is perfect matching in NC? That is, is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in theoretical computer science for over three decades, ever since the discovery of RNC matching algorithms. Within this question, the case of planar graphs has remained an enigma: On the one hand, counting the number of perfect matchings is far harder than finding one (the former is #P-complete and the latter is in P), and on the other, for planar graphs, counting has long been known to be in NC whereas finding one has resisted a solution. In this paper, we give an NC algorithm for finding a perfect matching in a planar graph. Our algorithm uses the above-stated fact about counting matchings in a crucial way. Our main new idea is an NC algorithm for finding a face of the perfect matching polytope at which Ω(n) new conditions, involving constraints of the polytope, are simultaneously satisfied. Several other ideas are also needed, such as finding a point in the interior of the minimum weight face of this polytope and finding a balanced tight odd set in NC.

Speaker's bio:

-



On the expressive power of user-defined effects: Effect handlers, monadic reflection, delimited continuations
Sam Lindley | University of Edinburgh

2018-06-29, 14:00 - 15:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

We compare the expressive power of three programming abstractions for user-defined computational effects: Plotkin and Pretnar’s effect handlers, Filinski’s monadic reflection, and delimited control. This comparison allows a precise discussion about the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features.

We present each notion as an extension of a simply-typed core lambda-calculus with an effect type system. Using Felleisen’s notion of a macro translation, we show that these abstractions can macro-express each other, provided we disregard types. Alas, not all of the translations are type-preserving; moreover, no alternative type-preserving macro translations exist. We show that if we add suitable notions of polymorphism to the core calculus and its extensions then all of the translations can be adapted to preserve typing.

(based on joint work with Yannick Forster, Ohad Kammar, and Matija Pretnar)

Speaker's bio:

-



Program Invariants
Joël Ouaknine | Max Planck Institute for Software Systems

2018-06-29, 11:30 - 12:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Automated invariant generation is a fundamental challenge in program analysis and verification, going back many decades, and remains a topic of active research. In this talk I'll present a select overview and survey of work on this problem, and discuss unexpected connections to other fields including algebraic geometry, group theory, and quantum computing. (No previous knowledge of these fields will be assumed.) This is joint work with Ehud Hrushovski, Amaury Pouly, and James Worrell.

Speaker's bio:

-



Designing a System for Heterogeneous Compute Units
Nils Asmussen | TU Dresden

2018-06-29, 10:30 - 11:30
Kaiserslautern building G26, room 111

Abstract:

The ongoing trend to more heterogeneity forces us to rethink the design of systems. In this talk, I will present a new system design that considers heterogeneous compute units (general-purpose cores with different instruction sets, DSPs, FPGAs, fixed-function accelerators, etc.) from the beginning instead of as an afterthought. The goal is to treat all compute units (CUs) as first-class citizens, enabling 1) isolation and secure communication between all types of CUs, 2) a direct interaction of all CUs to remove the conventional CPU from the critical path, and 3) access to OS services such as file systems and network stacks for all CUs. To study this system design, I am using a hardware/software co-design based on two key ideas: 1) introduce a new hardware component next to each CU used by the OS as the CUs' common interface and 2) let the OS kernel control applications remotely from a different CU. In my talk, I will show how this approach allows us to support arbitrary CUs as aforementioned first-class citizens, ranging from fixed-function accelerators to complex general-purpose cores.

Speaker's bio:

-



Boosting human capabilities on perceptual categorization tasks
Michael Mozer | University of Colorado, Boulder

2018-06-26, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

We are developing methods to improve human learning and performance on challenging perceptual categorization tasks, e.g., bird species identification, diagnostic dermatology. Our approach involves inferring psychological embeddings -- internal representations that individuals use to reason about a domain. Using predictive cognitive models that operate on an embedding, we perform surrogate-based optimization to determine efficient and effective means of training domain novices as well as amplifying an individual's capabilities at any stage of training. Our cognitive models leverage psychological theories of: similarity judgement and generalization, contextual and sequential effects in choice, attention shifts among embedding dimensions. Rather than searching over all possible training policies, we focus our search on policy spaces motivated by the training literature, including manipulation of exemplar difficulty and the sequencing of category labels. We show that our models predict human behavior not only in the aggregate but at the level of individual learners and individual exemplars, and preliminary experiments show the benefits of surrogate-based optimization on learning and performance.

This work was performed in close collaboration with Brett Roads at University College London.

Speaker's bio:

Michael Mozer received a Ph.D. in Cognitive Science at the University of California at San Diego in 1987. Following a postdoctoral fellowship with Geoffrey Hinton at the University of Toronto, he joined the faculty at the University of Colorado at Boulder and is presently a Professor in the Department of Computer Science and the Institute of Cognitive Science. He is secretary of the Neural Information Processing Systems Foundation, has served as Program Chair and General Chair at NIPS and as chair of the Cognitive Science Society. He is interested in human-centric artificial intelligence, which involves designing machine learning methods that leverage insights from human cognition, and building software tools to optimize human performance using machine learning methods.



Learning-Based Hardware/Software Power and Performance Prediction
Andreas Gerstlauer | University of Texas at Austin

2018-06-11, 10:30 - 11:30
Saarbrücken building E1 5, room 105 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Next to performance, early power and energy estimation is a key challenge in the design of computer systems. Traditional simulation-based methods are often too slow while existing analytical models are often not sufficiently accurate. In this talk, I will present our work on bridging this gap by providing fast yet accurate alternatives for power and performance modeling of software and hardware. In the past, we have pioneered so-called source-level and host-compiled simulation techniques that are based on back-annotation of source code with semi-analytically estimated target metrics. More recently, we have studied alternative approaches in which we employ advanced machine learning techniques to synthesize analytical proxy models that can extract latent correlations and accurately predict time-varying power and performance of an application running on a target platform purely from data obtained while executing the application natively on a completely different host machine. We have developed such learning-based approaches for both hardware and software. On the hardware side, learning-based models for white-box and black-box hardware accelerators reach simulation speeds of 1 Mcycles/s at 97% accuracy. On the software side, depending on the granularity at which prediction is performed, cross-platform prediction can achieve more than 95% accuracy at more than 3 GIPS of equivalent simulation throughput.
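The cross-platform proxy-model idea can be sketched minimally as a regression from host-side performance counters to target-platform power (synthetic data and a plain least-squares fit here, purely for illustration; the actual work uses considerably more advanced learning techniques on real measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: per-interval host counters (e.g. instructions,
# cache misses, branches), paired with power measured on the target platform.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([2.0, 5.0, 1.5])                  # hidden "ground truth"
y = X @ true_w + 0.3 + rng.normal(0.0, 0.05, 200)   # watts, with noise

# Fit a linear proxy model (weights plus intercept) by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict target power for an interval observed only on the host machine.
x_new = np.array([0.5, 0.2, 0.8])
pred = np.append(x_new, 1.0) @ coef
```

Once trained, such a proxy predicts time-varying power without ever simulating the target, which is where the reported simulation-speed advantage comes from.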

Speaker's bio:

Andreas Gerstlauer is an Associate Professor in Electrical and Computer Engineering at The University of Texas at Austin. He received his Ph.D. in Information and Computer Science from the University of California, Irvine (UCI) in 2004. His research interests include system-level design automation, system modeling, design languages and methodologies, and embedded hardware and software synthesis.



Modern algorithms for Bin packing
Thomas Rothvoss | University of Washington, Seattle

2018-05-15, 10:00 - 11:00
Saarbrücken building E1 4, room 024

Abstract:

One of the fundamental NP-hard problems in combinatorial optimization is Bin Packing. In terms of the best polynomial time approximation algorithm, we show how to improve over the previous best algorithm by Karmarkar and Karp from 1982 by a quadratic factor. The crucial techniques come from recent developments in discrepancy theory, a sub-field of combinatorics.

Then we will consider the special case that the number of different item sizes is a constant. It had been open for at least 15 years, whether or not this case is solvable in polynomial time. We provide an affirmative answer to that.

The talk includes joint work with Michel X. Goemans and Rebecca Hoberg.

Speaker's bio:

Thomas Rothvoss did his PhD in Mathematics in 2009 at EPFL in Switzerland under Friedrich Eisenbrand. He was then a PostDoc at MIT working with Michel Goemans. Since January 2014 he has been an Assistant Professor at the University of Washington, Seattle. He was (co-)winner of the best paper awards at STOC 2010, SODA 2014 and STOC 2014. He received a Sloan Research Fellowship, an NSF CAREER award and a Packard Fellowship.



Accountability in the Governance of Machine Learning
Joshua Kroll | School of Information, University of California, Berkeley

2018-05-07, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

As software systems, especially those based on machine learning and data analysis, become ever more deeply engrained in modern society and take on increasingly powerful roles in shaping people's lives, concerns have been raised about the fairness, equity, and other embedded values of these systems. Many definitions of "fairness" have been proposed, and the technical definitions capture a variety of desirable statistical invariants. However, such invariants may not address fairness for all stakeholders, may be in tension with each other or other desirable properties, and may not be recognized by people as capturing the correct notion of fairness. In addition, requirements that serve fairness, in practice, often are enacted by prohibiting a set of practices considered unfair rather than fully modeling a particular definition of fairness.

For these reasons, we attack the goal of producing fair systems from a different starting point. We argue that a focus on accountability and transparency in the design of a computer system is a stronger basis for reasoning about fairness. We outline a research agenda in responsible system design based on this approach, attacking both technical and non-technical open questions. Technology can help realize human values - including fairness - in computer systems, but only if it is supported by appropriate organizational best practices and a new approach to the system design life cycle.

As a first step toward realizing this agenda, we present a cryptographic protocol for accountable algorithms, which uses a combination of commitments and zero-knowledge proofs to construct audit logs for automated decision-making systems that are publicly verifiable for integrity. Such logs comprise an integral record of the behavior of a computer system, providing evidence for future interrogation, oversight, and review while also providing immediate public assurance of important procedural regularity properties, such as the guarantee that all decisions were made under the same policy. Further, the existence of such evidence provides a strong incentive for the system's designers to capture the right human values by making deviations from those values apparent and undeniable. Finally, we describe how such logs can be extended to demonstrate the existence of key fairness and transparency properties in machine-learning settings. For example, we consider how to demonstrate that a model was trained on particular data, that it operates without considering particular sensitive inputs, or that it satisfies particular fairness invariants of the type considered in the machine-learning fairness literature. This approach leads to a better, more complete, and more flexible outcome from the perspective of preventing unfairness.
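The commitment half of such a protocol can be sketched in a few lines (a plain hash commitment with hypothetical field names; the zero-knowledge proofs that make the log verifiable without revealing the policy are omitted):

```python
import hashlib
import json
import secrets

def commit(policy: dict):
    """Commit to a decision policy: publish the digest now, reveal later."""
    nonce = secrets.token_bytes(16)
    blob = nonce + json.dumps(policy, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest(), nonce

def verify(digest: str, policy: dict, nonce: bytes) -> bool:
    """Check that a revealed (policy, nonce) pair matches the commitment."""
    blob = nonce + json.dumps(policy, sort_keys=True).encode()
    return digest == hashlib.sha256(blob).hexdigest()

policy = {"threshold": 0.7, "model": "v3"}   # hypothetical decision policy
digest, nonce = commit(policy)

# Every audit-log entry carries the same commitment, so an auditor who
# later sees (policy, nonce) can confirm all decisions used one fixed policy.
log = [{"subject": s, "policy_commitment": digest} for s in ("alice", "bob")]

assert all(verify(e["policy_commitment"], policy, nonce) for e in log)
assert not verify(digest, {"threshold": 0.9, "model": "v3"}, nonce)
```

A single consistent commitment across all entries is exactly the procedural-regularity guarantee described above: any after-the-fact change of policy is apparent and undeniable.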

Speaker's bio:

Joshua A. Kroll is a computer scientist studying the relationship between governance, public policy, and computer systems. As a Postdoctoral Research Scholar at the School of Information at the University of California at Berkeley, his research focuses on how technology fits within a human-driven, normative context and how it satisfies goals driven by ideals such as fairness, accountability, transparency, and ethics. He is most interested in the governance of automated decision-making systems, especially those using machine learning. His paper "Accountable Algorithms" in the University of Pennsylvania Law Review received the Future of Privacy Forum's Privacy Papers for Policymakers Award in 2017.

Joshua's previous work spans accountable algorithms, cryptography, software security, formal methods, Bitcoin, and the technical aspects of cybersecurity policy. He also spent two years working on cryptography and internet security at the web performance and security company Cloudflare. Joshua holds a PhD in computer science from Princeton University, where he received the National Science Foundation Graduate Research Fellowship in 2011.



#DebateNight: The Role and Influence of Socialbots on Twitter During the 1st U.S. Presidential Debate
Marian-Andrei Rizoiu | Australian National University

2018-05-03, 11:15 - 12:15
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Serious concerns have been raised about the role of `socialbots' in manipulating public opinion and influencing the outcome of elections by retweeting partisan content to increase its reach. Here we analyze the role and influence of socialbots on Twitter by determining how they contribute to retweet diffusions. We collect a large dataset of tweets during the 1st U.S. presidential debate in 2016 and we analyze its 1.5 million users from three perspectives: user influence, political behavior (partisanship and engagement) and botness. First, we define a measure of user influence based on the user's active contributions to information diffusions, i.e. their tweets and retweets. Given that Twitter does not expose the retweet structure -- it associates all retweets with the original tweet -- we model the latent diffusion structure using only tweet time and user features, and we implement a novel, scalable approach to estimate influence over all possible unfoldings. Next, we use partisan hashtag analysis to quantify user political polarization and engagement. Finally, we use the BotOrNot API to measure user botness (the likelihood of being a bot). We build a two-dimensional "polarization map" that allows for a nuanced analysis of the interplay between botness, partisanship and influence. We find that not only are socialbots more active on Twitter -- starting more retweet cascades and retweeting more -- but they are 2.5 times more influential than humans, and more politically engaged. Moreover, pro-Republican bots are both more influential and more politically engaged than their pro-Democrat counterparts. However, we caution against blanket statements that software designed to appear human dominates politics-related activity on Twitter. Firstly, it is known that accounts controlled by teams of humans (e.g. organizational accounts) are often identified as bots. Secondly, we find that many highly influential Twitter users are in fact pro-Democrat and that most pro-Republican users are mid-influential and likely to be human (low botness).

Speaker's bio:

Dr. Marian-Andrei Rizoiu is a Research Fellow with the Australian National University, studying the dynamics of human attention in the online environment. His research has made several key contributions, particularly to the areas of online popularity prediction and online privacy. For the past four years, he has been developing theoretical models for online information diffusion, which can account for complex social phenomena, such as the rise and fall of online popularity, the spread of misinformation or the adoption of disruptive technologies. He has approached questions such as "Why did X become popular, but not Y?" and "How can items be promoted?", with implications in advertising and marketing. Marian-Andrei has also worked on detecting the evolution of privacy loss over time. His research has shown that privacy "leaks" over time and has identified the factors causing the loss: the individual's own actions and the environment. The conclusions were staggering: privacy continues to decrease even for users who have retired from activity. Marian-Andrei has published in the most selective venues of the field (such as WWW, WSDM, ICWSM or CIKM), and his work has received significant media attention, including from the Wikimedia Foundation for his work concerning the privacy of Wikipedia editors (featured in the March 2016 Wikimedia Research Showcase). See more at www.rizoiu.eu.



Maximizing the Social Good: Markets without Money
Nicole Immorlica | Microsoft Research New England

2018-05-03, 10:00 - 11:00
Saarbrücken building E1 4, room 024

Abstract:

To create a truly sustainable world, we need to generate ample resources and allocate them appropriately.  In traditional economics, these goals are achieved using money.  However, in many settings of particular social significance, monetary transactions are infeasible, be it due to ethical considerations or technological constraints.  In this talk, we will discuss alternatives to money, including risk, social status, and scarcity, and show how to use them to achieve socially-optimal outcomes.  Risk helps determine a person's value for a resource: the more someone is willing to risk for something, the more they value it.  Using this insight, we propose an algorithm to find a good assignment of students in school choice programs.  Social status helps motivate people to contribute to a public project.  Using this insight, we design badges to maximize contributions to user-generated content websites.  Scarcity forces people to evaluate trade-offs, allowing algorithms to infer the relative strength of their preference for different options.  Using this insight, we design voting schemes that select the most highly-valued alternative.

Speaker's bio:

-



Probabilistic Program Equivalence for NetKAT
Alexandra Silva | University College London

2018-05-02, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

We tackle the problem of deciding whether two probabilistic programs are equivalent in the context of Probabilistic NetKAT, a formal language for reasoning about the behavior of packet-switched networks. The main challenge lies in reasoning about iteration, which we address by a reduction to finite-state absorbing Markov chains. Building on this approach, we develop an effective decision procedure based on stochastic matrices. Through an extended case study with a real-world data center network, we show how to use these techniques to automatically verify various properties of interest, including resilience in the presence of failures. This is joint work with Steffen Smolka, Praveen Kumar, Nate Foster, Justin Hsu, David Kahn, and Dexter Kozen.
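The reduction at the heart of the procedure can be illustrated with a toy sketch (hypothetical two-program example, not the authors' implementation): each iterating program induces a finite absorbing Markov chain over packet states, and two programs agree when their absorption distributions coincide, which the standard fundamental-matrix formula computes in closed form.

```python
import numpy as np

def absorption_probs(P, transient, absorbing):
    """For a stochastic matrix P, return the probability of ending in each
    absorbing state from each transient state, via N = (I - Q)^-1."""
    Q = P[np.ix_(transient, transient)]   # transient -> transient steps
    R = P[np.ix_(transient, absorbing)]   # transient -> absorbing steps
    N = np.linalg.inv(np.eye(len(transient)) - Q)
    return N @ R

# Two toy "programs": both start in state 0; states 1 and 2 are absorbing.
P1 = np.array([[0.0, 0.5, 0.5],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
# P2 loops in state 0 with probability 0.5, splitting the rest evenly;
# its absorption distribution is again (0.5, 0.5), so the two programs
# are equivalent in this observational sense despite unbounded iteration.
P2 = np.array([[0.5, 0.25, 0.25],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])

A1 = absorption_probs(P1, [0], [1, 2])
A2 = absorption_probs(P2, [0], [1, 2])
print(np.allclose(A1, A2))  # True
```

Inverting (I - Q) once resolves the infinite iteration exactly, which is what makes equivalence effectively decidable on finite chains.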

Speaker's bio:

-



Measurements, predictions, and the puzzle of machine learning: what data from 10 million hosts can teach us about security
Tudor Dumitras | University of Maryland

2018-04-20, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

What are the odds that you will get hacked tomorrow? To answer this question, it is not enough to reason about the state of your host -- we must also understand how easy it is for adversaries to exploit software vulnerabilities and what helps them distribute malware around the world. Moreover, the machine learning techniques that drive the success of such prediction tasks in non-adversarial domains, like computer vision or autonomous driving, face new challenges in security. 

In this talk I will discuss my work, combining machine learning with global-scale measurements, that has exposed critical security threats and has guided industrial practices. First, I will present the Worldwide Intelligence Network Environment (WINE), an analytics platform that has enabled systematic security measurements across more than 10 million hosts from around the world. Second, I will use WINE as a vehicle for exploring open research questions, such as the duration and impact of zero-day attacks, the weaknesses in public key infrastructures (PKIs) that allow malware to masquerade as reputable software, and how we can use machine learning to predict certain security incidents. I will conclude by discussing the impact of these predictions on the emerging cyber insurance industry and the lessons we learned about using machine learning in the security domain.

Speaker's bio:

Tudor Dumitraș is an Assistant Professor in the Electrical & Computer Engineering Department at the University of Maryland, College Park. His research focuses on data-driven security: he studies real-world adversaries empirically, he builds machine learning systems for detecting attacks and predicting security incidents, and he investigates the security of machine learning in adversarial environments. In his previous role at Symantec Research Labs he built the Worldwide Intelligence Network Environment (WINE) - a data analytics platform for security research. His work on the effectiveness of certificate revocations in the Web PKI was featured in the Research Highlights of the Communications of the ACM in 2018, and his measurement of the duration and prevalence of zero-day attacks received an Honorable Mention in the NSA competition for the Best Scientific Cybersecurity Paper of 2012. He also received the 2011 A. G. Jordan Award from the ECE Department at Carnegie Mellon University, the 2009 John Vlissides Award from ACM SIGPLAN, and the Best Paper Award at ASP-DAC'03. Tudor holds a Ph.D. degree from Carnegie Mellon University.



Lovasz meets Weisfeiler-Leman
Prof. Martin Grohe | RWTH Aachen University

2018-04-04, 10:30 - 11:30
Kaiserslautern building G26, room 111

Abstract:

I will speak about an unexpected correspondence between a beautiful theory, due to Lovasz, about homomorphisms and graph limits and a popular heuristic for the graph isomorphism problem known as the Weisfeiler-Leman algorithm. I will also relate this to graph kernels in machine learning. Indeed, the context of this work is to design and understand similarity measures between graphs and discrete structures.

Speaker's bio:

Prof. Martin Grohe is a full professor at RWTH Aachen University, where he heads the Chair for Logic and Theory of Discrete Systems. His research interests are logic, algorithms and complexity, database theory, graph theory, and algorithmic learning theory. (Joint work with Holger Dell and Gaurav Rattan.)



Data Science for Human Well-being
Tim Althoff | Stanford University

2018-03-26, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The popularity of wearable and mobile devices, including smartphones and smartwatches, has generated an explosion of detailed behavioral data. These massive digital traces provide us with an unparalleled opportunity to realize new types of scientific approaches that provide novel insights about our lives, health, and happiness. However, gaining valuable insights from these data requires new computational approaches that turn observational, scientifically "weak" data into strong scientific results and can computationally test domain theories at scale. In this talk, I will describe novel computational methods that leverage digital activity traces at the scale of billions of actions taken by millions of people. These methods combine insights from data mining, social network analysis, and natural language processing to generate actionable insights about our physical and mental well-being. Specifically, I will describe how massive digital activity traces reveal unknown health inequality around the world, and how personalized predictive models can target personalized interventions to combat this inequality. I will demonstrate that modelling how quickly we interact with search engines enables new types of insights into sleep and cognitive performance. Further, I will describe how natural language processing methods can help improve counseling services for millions of people in crisis. I will conclude the talk by sketching interesting future directions for computational approaches that leverage digital activity traces to better understand and improve human well-being.

Speaker's bio:

Tim Althoff is a Ph.D. candidate in Computer Science in the Infolab at Stanford University advised by Jure Leskovec. His research advances computational methods to improve human well-being, combining techniques from Data Mining, Social Network Analysis, and Natural Language Processing. Prior to his PhD, Tim obtained M.S. and B.S. degrees from Stanford University and University of Kaiserslautern, Germany. He has received several fellowships and awards including the SAP Stanford Graduate Fellowship, Fulbright scholarship, German Academic Exchange Service scholarship, the German National Merit Foundation scholarship, and a Best Paper Award by the International Medical Informatics Association. Tim’s research has been covered internationally by news outlets including BBC, CNN, The Economist, The Wall Street Journal, and The New York Times.



Characterizing the Space of Adversarial Examples in Machine Learning
Nicolas Papernot | Pennsylvania State University

2018-03-22, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities, though expanding, remains limited. In this talk, I map the threat model space of ML algorithms, and systematically explore the vulnerabilities resulting from the poor generalization of ML models when they are presented with inputs manipulated by adversaries. This characterization of the threat space prompts an investigation of defenses that exploit the lack of reliable confidence estimates for predictions made. In particular, we introduce a promising new approach to defensive measures tailored to the structure of deep learning. Through this research, we expose connections between the resilience of ML to adversaries, model interpretability, and training data privacy.
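A minimal illustration of such adversarially manipulated inputs, on a hypothetical linear classifier rather than the deep models discussed in the talk, in the spirit of fast-gradient-sign perturbations:

```python
import numpy as np

# Toy linear classifier: predict +1 if w.x + b > 0, else -1.
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict(x):
    return 1 if w @ x + b > 0 else -1

x = np.array([0.5, -0.5, 0.2])   # score = 2.6, classified +1

# Adversarial perturbation: a bounded step against the score's gradient
# (for a linear score w.x + b, the gradient with respect to x is just w).
eps = 1.0
x_adv = x - eps * np.sign(w)     # score drops to -3.4, class flips to -1

print(predict(x), predict(x_adv))  # 1 -1
```

Each coordinate moves by at most eps, yet the decision flips; the poor generalization that the talk systematizes is the deep-network analogue of this effect.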

Speaker's bio:

Nicolas Papernot is a PhD student in Computer Science and Engineering working with Professor Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security, privacy and machine learning. He is supported by a Google PhD Fellowship in Security and received a best paper award at ICLR 2017. He is also the co-author of CleverHans, an open-source library widely adopted in the technical community to benchmark machine learning in adversarial settings. In 2016, he received his M.S. in Computer Science and Engineering from the Pennsylvania State University and his M.S. in Engineering Sciences from the Ecole Centrale de Lyon.



Catch-22: Isolation and Efficiency in Datacenters
Mihir Nanavati | UBC

2018-03-19, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The datacenters behind popular cloud services are extremely resource-dense. A typical deployment has thousands of cores, terabytes of memory, gigabits of bandwidth, and petabytes of storage available per-rack. Datacenter economics require that providers share these resources across multiple customers for efficient utilization and as a means of providing price-competitive offerings. Shared infrastructure, however, risks cross-client interference and can result in degraded performance or data leaks, leading to outages and breaches. My work explores this tension with systems that provide security and performance isolation on shared hardware, while enabling efficient utilization and preserving the underlying performance of the devices.

This talk describes two such systems dealing with different resources: the first, Plastic, transparently mitigates poor scalability on multi-core systems caused by insufficient cache line isolation, which results in unnecessary memory contention and wasted compute cycles. The second, Decibel, provides isolation in shared non-volatile storage and allows clients to remotely access high-speed devices at latencies comparable to local devices while guaranteeing throughput, even in the face of competing workloads.

Speaker's bio:

Mihir is a PhD candidate at the University of British Columbia, working with Andy Warfield and Bill Aiello. He is broadly interested in systems and has worked on multi-core scalability and performance, and on providing better security and performance isolation in virtualized environments. During his PhD, he also spent a couple of years at Coho Data, working on high-speed storage systems. Prior to graduate school, he worked in the security industry on detecting kernel-level malware and on black-box vulnerability detection for applications.



Static Program Analysis for a Software-Driven Society
Dr. Caterina Urban | ETH Zurich

2018-03-15, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

As we rely more and more on computer software for automating processes and making decisions, the range of software that is considered critical goes well beyond the avionics and nuclear power industries: nowadays software plays an increasingly important role in convicting criminals, trading on the financial markets, autonomously driving cars, and performing medical diagnoses, to name a few applications. It is now paramount to ensure the reliability and security of such software, and expectations about software fairness and transparency are rapidly rising. To meet these needs, we need new mathematical models of software behavior that capture the aspects relevant for a particular dependability property, and new algorithmic approaches to effectively navigate this mathematical space and decide whether the software behaves as desired. This talk gives an overview of the steps I have taken towards addressing these challenges. Starting from a series of works on deciding software termination, I show that the insights from this domain are transferable to other formal methods and properties. These results pave the way for a unified framework for deciding increasingly advanced software dependability properties. I discuss the first results that I obtained in this more general direction, which in particular bring new conceptual clarity to the synergies with deciding security properties of software. Finally, I conclude with an outlook to the future and discuss the potential impact of this research on our personal, civic, and economic life.

Speaker's bio:

Caterina Urban is a Postdoctoral Researcher in the Department of Computer Science of ETH Zurich. Her main research interest is the development of methods and tools to enhance the reliability of computer software and to help understand the complex software systems that nowadays permeate our society. Caterina received her Ph.D. from the École Normale Supérieure in Paris, where she was advised by Radhia Cousot and Antoine Miné. During her Ph.D., she spent five months as a visiting research scholar at the NASA Ames Research Center and Carnegie Mellon University Silicon Valley. For her work, she received an ETH Zurich Career Seed Grant, a Gilles Kahn Thesis Award Honorable Mention, and the best paper award at the 25th International Conference on Automated Deduction.



Language Support for Distributed Systems in Scala
Heather Miller | Northeastern University and EPFL

2018-03-12, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Recent years have seen a rise in distributed systems for interactive, large-scale data processing. Cutting-edge systems focus on reducing latency and increasing expressiveness in order to provide an interactive and rich experience to a broader and more varied set of users coming from emerging fields such as data science. Meanwhile, the languages and runtimes underlying such systems face numerous challenges in meeting the severely demanding needs of these new distributed systems; popular languages and runtimes like Scala and the JVM (a) limit the customizability of fundamental operations like serialization, and (b) expose low-level distribution-related errors to application developers and end users when core language features, such as functions, are distributed. This talk presents three systems that (a) give distributed systems builders more control over these primitives, thereby enabling important optimizations, and (b) increase the reliability of distributing functions and objects. Theoretical, experimental, and empirical results validate our work.

Speaker's bio:

Heather Miller is an Assistant Clinical Professor at Northeastern University’s College of Computer and Information Science in Boston and the Executive Director of the Scala Center at EPFL, where she is also a Research Scientist. She recently completed her PhD in EPFL’s Faculty of Computer and Communication Science, where she worked on the now-widespread programming language Scala. Heather’s research interests are at the intersection of data-centric distributed systems and programming languages, with a focus on transferring her research results into industrial use. She has also taught and led the development of several popular MOOCs, reaching some 1,000,000 students, such as "Big Data Analysis in Scala and Spark" and "Functional Programming Principles in Scala."



Observing and Controlling Distributed Systems with Cross-Cutting Tools
Jonathan Mace | Brown University

2018-03-05, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Distributed systems represent some of the most interesting and successful computing applications in use today, from modern web applications and social networks to data analytics and large-scale machine learning. However, it is notoriously difficult to understand, troubleshoot, and enforce distributed systems behaviors because, unlike standalone programs, they lack a central point of visibility and control. This impacts important tasks like resource management, performance, security, accounting, and more. In this talk I will outline techniques and abstractions for re-establishing cross-component visibility and control. I will demonstrate with two cross-cutting tools that I have developed in my research: Retro, which measures resource usage and coordinates scheduler parameters to achieve end-to-end performance goals; and Pivot Tracing, which dynamically monitors and correlates metrics across component boundaries. Together, these tools illustrate some of the common challenges and potential solutions when developing and deploying tools for distributed systems.

Speaker's bio:

Jonathan Mace is a Ph.D. candidate in the Computer Science department at Brown University, advised by Professor Rodrigo Fonseca. His research centers on how to understand and enforce end-to-end behaviors in distributed systems. During his Ph.D. he was awarded the Facebook Fellowship in Distributed Systems, and he received a Best Paper Award at SOSP for his work on Pivot Tracing. Jonathan received his undergraduate degree in Mathematics and Computer Science from Oxford University in 2009.



Storage mechanisms and finite-state abstractions for software verification
Georg Zetzsche | Université Paris-Diderot

2018-03-01, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

A popular approach to automatic program verification is to come up with an abstraction that reflects pertinent properties of the program. This abstraction is drawn from a class of formal models that is amenable to analysis. In the plethora of existing formal models, the aspects of programs that can be represented faithfully are typically determined by the infinite dimension of its state space, its storage mechanism. A central theme of my recent research is to obtain general insights into how the structure of the storage mechanism affects the analysis of a formal model. In the first part of my talk, I will survey results on an overarching framework of storage mechanisms developed in my doctoral work. It encompasses a range of infinite-state models and permits meaningful characterizations of when a particular method of analysis is applicable. Another current focus of my work concerns finite-state abstractions of infinite-state models. On one hand, these can be over- or under-approximations that are more efficient to analyze than infinite-state systems. On the other hand, they can serve as easy-to-check correctness certificates that are produced instead of yes-or-no answers to a verification task. Thus, the second part of my talk will be concerned with results on computing downward closures and related finite-state abstractions.

Speaker's bio:

Georg Zetzsche is a post-doc at the Institut de Recherche en Informatique Fondamentale (IRIF) in Paris and holds a fellowship of the Fondation Sciences Mathématiques de Paris. Before that, he was a post-doc at Laboratoire Spécification et Vérification (LSV) in Cachan with a fellowship from the German Academic Exchange Service (DAAD). He received his PhD in 2015 from the University of Kaiserslautern. For his doctoral work, he received the Distinguished Dissertation Award of the European Association for Theoretical Computer Science (EATCS).



Fighting Large-scale Internet Abuse
Kevin Borgolte | University of California, Santa Barbara

2018-02-26, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

The widespread access to the Internet and the ubiquity of web-based services make it easy to communicate and interact globally. Unfortunately, the software implementing the functionality of these services is often vulnerable to attacks. In turn, an attacker can exploit them to compromise and abuse the services for nefarious purposes. In my research, I aim to better understand, detect, and prevent these attacks.

In this talk, we first look at detecting website defacements, which can inflict significant harm on a website's owner or operator through lost sales, reputational damage, or legal ramifications. Then, we dive into how to automatically identify malware distribution campaigns, which have become a major challenge in today's Internet. Next, we look at how to mitigate the dangers of domain takeover attacks, which give attackers the same capabilities to spread misinformation or malware as vulnerabilities do, but without the need for an actual vulnerability in the affected service. Last, I will conclude by sketching interesting future directions on how to better understand, detect, and prevent Internet abuse.

Speaker's bio:

Kevin Borgolte is a Ph.D. Candidate in Computer Science in the SecLab at the University of California, Santa Barbara. In his research, he advances methods and builds systems to better understand, detect, and prevent large-scale Internet abuse. Prior to his Ph.D. studies, Kevin received an M.Sc. from ETH Zurich and a B.Sc. from the University of Bonn, Germany. He is a member of the Shellphish Capture the Flag team, with which he won 3rd place at the DARPA Cyber Grand Challenge. Kevin's research has been covered by CNN, The Guardian, WIRED, and The Christian Science Monitor, as well as Schneier on Security and Krebs on Security.



High Performance Data Center TCP Packet Processing
Antoine Kaufmann | University of Washington

2018-02-19, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

TCP is widely used for client-server communication in modern data centers. But TCP packet handling is notoriously CPU-intensive, accounting for an increasing fraction of data center processing time. Techniques such as TCP segment offload, kernel bypass, and RDMA are of limited benefit for the typical small, frequent RPCs. These techniques can also compromise protocol agility, resource isolation, and overall system reliability, and they complicate multi-tenancy.

I propose a unique refactoring of TCP functionality that splits processing between a streamlined fast path for common operations and an out-of-band slow path. Protocol processing executes in the kernel on dedicated cores that enforce correctness and resource isolation. Applications communicate with the kernel asynchronously through event queues, improving parallelism and cache utilization. I show that my approach can increase RPC throughput by up to 4.1x compared to Linux. The fast path can be offloaded to a programmable NIC to further improve performance and minimize CPU time for network processing. With hardware offload, data packets are delivered directly from application to application, while the NIC and kernel cooperate to enforce correctness and resource isolation. I show that hardware offload can increase per-core packet throughput by 10.7x compared to the Linux kernel TCP implementation.

Speaker's bio:

Antoine Kaufmann is a Ph.D. candidate in Computer Science and Engineering at the University of Washington, where he is a member of the Computer Systems Lab. Previously, he completed a Master's Degree from the Swiss Federal Institute of Technology (ETH) in Zurich. Antoine's research area is operating systems and networks, and much of his recent work has focused on improving application I/O performance in the data center.



Towards Latency Guarantees in Datacenters
Keon Jang | Google

2018-02-15, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

A large portion of computation is now carried out in datacenters. A single datacenter hosts hundreds or thousands of applications that share common compute and network resources. Isolating each application's performance, i.e., ensuring that its performance is predictable regardless of the behavior of other applications, is essential to developing and deploying datacenter applications; otherwise, developers need to account for co-located applications, which increases development and deployment complexity. Despite its importance, current approaches to performance isolation are incomplete and focus mostly on isolating computational resources. In this talk I present two schemes for isolating network performance. The first, Silo, takes a resource-allocation-based approach and implements mechanisms for guaranteeing an application's network latency and throughput. The second, Express Pass, takes a congestion-control-based approach and fairly partitions network resources across applications. Both approaches require no hardware (ASIC) changes and can be deployed in today's datacenters. This shows that full application performance isolation is achievable today.

Speaker's bio:

Keon Jang is a Software Engineer on Google's Networking Infrastructure team. Previously he was a research scientist at Intel, and before that a postdoctoral researcher at Microsoft Research Cambridge, where he worked with Hitesh Ballani. Keon received his PhD from KAIST in Daejeon, South Korea, where he was advised by Sue Moon and KyoungSoo Park. His current research focuses on datacenter networks.



Fast Distributed Optimization Algorithms via Low-Congestion Shortcuts
Bernhard Häupler | Carnegie Mellon University

2018-02-13, 15:15 - 16:15
Saarbrücken building E1 4, room 024

Abstract:

Whether or not a distributed optimization problem, such as MST, shortest path, or min-cut, can be solved fast in a given network depends in a highly non-trivial manner on the network's topology. While there are pathological worst-case n-node topologies on which any of these optimization problems requires Omega(sqrt(n)) rounds to compute, despite a small diameter D, most networks allow for fast O(D polylog n)-round distributed algorithms. This talk will introduce the low-congestion shortcuts framework, which makes it possible to study these dependencies and easy to design algorithms that, on networks of interest, are provably fast and often near instance-optimal.

This is joint work with Mohsen Ghaffari, Goran Zuzic, Taisuke Izumi, Ellis Hershkowitz, David Wajc, and Jason Li.

Speaker's bio:

Bernhard Haeupler is an Assistant Professor in the Computer Science Department of Carnegie Mellon University. He received his PhD and MSc in Computer Science from MIT, and a BSc, MSc, and Diploma in (Applied) Mathematics from the Technical University of Munich. He has (co-)authored over 70 publications and won several awards for his research, including STOC and SODA best student paper awards, the 2014 ACM-EATCS Doctoral Dissertation Award in Distributed Computing, and the NSF CAREER award. His research interests lie at the intersection of classical algorithm design, distributed computing, and coding theory.



Liquid Haskell: Usable Language-Based Program Verification
Niki Vazou | University of Maryland

2018-02-12, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Formal verification has been gaining the attention and resources of both the academic and the industrial world, since it prevents critical software bugs that cost money, energy, time, and even lives. Yet software development and formal verification are decoupled, requiring verification experts to prove properties of a template – instead of the actual – implementation, ported into verification-specific languages. My goal is to bridge formal verification and software development for the programming language Haskell. Haskell is a unique programming language in that it is a general-purpose, functional language used for industrial development, while simultaneously standing at the leading edge of research and teaching, welcoming new, experimental, yet useful features.

In this talk I present Liquid Haskell, a refinement type checker in which formal specifications are expressed as a combination of Haskell’s types and expressions and are automatically checked against real Haskell code. This natural integration of specifications into the language, combined with automatic checking, established Liquid Haskell as a usable verifier, enthusiastically adopted by both industrial and academic Haskell users. Recently, I turned Liquid Haskell into a theorem prover, in which arbitrary theorems about Haskell functions can be proved within the language. As a consequence, Liquid Haskell can be used in Haskell courses to teach the principles of mechanized theorem proving.

Turning a general purpose language into a theorem prover opens up new research questions – e.g., can theorems be used for runtime optimizations of existing real-world applications? – that I plan to explore in the future.

Speaker's bio:

Niki Vazou is a Postdoctoral Fellow at the Programming Languages Group at the University of Maryland. She completed her Ph.D. at UC San Diego, where together with her supervisor Ranjit Jhala, she developed Liquid Haskell, a formal verifier integrated into the Haskell programming language. During her Ph.D. she was an intern at MSR Cambridge, MSR Redmond, and Awake Security, where she collaborated with top Haskell researchers and industrial developers on further expanding the applications and foundations of Liquid Haskell. Niki received the MSR Graduate Fellowship and is a member of the HaskellOrg committee.




Caribou -- Intelligent Distributed Storage for the Datacenter
Zsolt Istvan | ETH Zurich

2018-02-08, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

In the era of Big Data, datacenter and cloud architectures decouple compute and storage resources from each other for better scalability. While this design choice enables elastic scale-out, it also causes unnecessary data movements. One solution is to push parts of the computation down to storage, where data can be filtered more efficiently. Systems that do this are already in use and rely either on regular server machines as storage nodes or on network-attached storage devices. The former offer complex computation and rich functionality, since plenty of conventional cores are available to run the offloaded computation, but they are quite inefficient because of the over-provisioning of computing capacity and the bandwidth mismatches between storage, CPU, and network. Networked storage devices, on the other hand, are better balanced in terms of bandwidth, but at the price of offering very limited options for offloading data processing.

With Caribou, we explore an alternative design that offers rich offloading functionality while matching the available line-rate processing performance of either storage or network. It also does this in a much more efficient package (size, energy consumption) than regular servers. Our FPGA-based prototype system has been designed such that the internal data management logic can saturate the network for most operation mixes, without being over-provisioned. As a result, it can extract and process data from storage at multi-GB/s rate before sending it to the computing nodes, while at the same time offering features such as replication for fault-tolerance.

Caribou has been released as open source. Its modular design and extensible processing pipeline make it a convenient platform for exploring domain-specific processing inside storage nodes.

Speaker's bio:

Zsolt Istvan is a recent PhD graduate of the Systems Group at ETH Zurich. His research looks at using FPGAs in the context of databases and distributed systems, with the goal of building hybrid solutions and specialized accelerators for data intensive tasks. Before graduate school he was a Master's student at ETH Zurich, Switzerland, and a Bachelor's student at the Technical University of Cluj-Napoca, Romania.



Formal Proof of Polynomial-Time Complexity with Quasi-Interpretations
Hugo Férée | University of Kent

2017-11-22, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

Formal reasoning about computational complexity turns out to be quite cumbersome, as it often requires manipulating explicit time bounds for a specific machine model. Implicit Computational Complexity is a research area that provides techniques and complexity-class characterisations to avoid this. We have formally verified the soundness (and part of the completeness) of one such technique – called quasi-interpretations – using the Coq proof assistant. In particular, we turned this into a tool that helps guarantee the polynomial complexity of programs (here, term rewriting systems).

Speaker's bio:

-



Toward Data-Driven Education
Rakesh Agrawal | EPFL

2017-11-13, 10:00 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

An educational program of study can be viewed as a knowledge graph consisting of learning units and relationships between them. Such a knowledge graph provides the core data structure for organizing and navigating learning experiences. We address three issues in this talk. First, how can we synthesize the knowledge graph, given a set of concepts to be covered in the study program. Next, how can we use data mining to identify and correct deficiencies in a knowledge graph. Finally, how can we use data mining to form study groups with the goal of maximizing overall learning. We conclude by pointing out some open research problems.
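To make the knowledge-graph idea concrete, a study program can be modeled as a prerequisite DAG over learning units, from which a valid study order follows by topological sorting. A minimal sketch, with hypothetical concepts and edges (not from the talk):

```python
from graphlib import TopologicalSorter

# Hypothetical study program: each concept maps to its prerequisite concepts.
prereqs = {
    "limits": set(),
    "derivatives": {"limits"},
    "integrals": {"derivatives"},
    "series": {"limits"},
    "differential equations": {"derivatives", "integrals"},
}

# static_order() yields every concept only after all of its prerequisites.
order = list(TopologicalSorter(prereqs).static_order())
print(order)
```

Synthesizing or repairing the graph, as discussed in the talk, then amounts to adding, removing, or reweighting such edges based on the concept set and on mined learner data.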

Speaker's bio:

Rakesh Agrawal is the President and Founder of the Data Insights Laboratories, San Jose, USA, and a Visiting Professor at EPFL, Lausanne, Switzerland. He is a member of the National Academy of Engineering, both USA and India, a Fellow of ACM, and a Fellow of IEEE. He has been both an IBM Fellow and a Microsoft Fellow. He has also been the Rukmini Visiting Chair Professor at the Indian Institute of Science, Bangalore, India. ACM SIGKDD awarded him its inaugural Innovations Award, and ACM SIGMOD the Edgar F. Codd Award. He was named to Scientific American’s first list of the top 50 scientists. Rakesh has been granted 80+ patents and published 200+ papers, including the first and second highest-cited in databases and data mining. Five of his papers have received "test-of-time" awards. His papers have received 100,000+ citations. His research formed the nucleus of IBM Intelligent Miner, which led to the creation of data mining as a new software category. Besides Intelligent Miner, several other commercial products incorporate his work, including IBM DB2 and WebSphere and Microsoft Bing.



StaRVOOrS: Combined Static and Runtime Verification of Object-Oriented Software
Wolfgang Ahrendt | Chalmers University

2017-11-09, 11:00 - 12:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Static verification techniques are used to analyse and prove properties about programs before they are deployed. In particular, deductive verification techniques often work directly on the source code and are used to verify data-oriented properties of all possible executions. In contrast, runtime verification techniques have been extensively used for control-oriented properties, and analyse the concrete executions that occur in the deployed program. I present an approach in which data-oriented and control-oriented properties may be stated in a single formalism amenable to both static and dynamic verification techniques. The specification language enhances a control-oriented property language with data-oriented pre/postconditions. We show how such specifications can be analysed using a combination of the deductive verification system KeY and the runtime verification tool LARVA. Verification is performed in two steps: KeY first performs fully automated proof attempts; the resulting partial proofs are then analysed and used to optimize the specification for efficient runtime checking.

Speaker's bio:

Wolfgang Ahrendt is an associate professor at Chalmers University of Technology in Gothenburg, Sweden. His major interests are software verification, theorem proving, and runtime verification. He is one of the people behind the KeY approach and system, and has recently co-edited the book 'Deductive Software Verification – The KeY Book' (LNCS 10001).



Fairer and more accurate, but for whom?
Alexandra Chouldechova | CMU

2017-07-25, 10:30 - 12:00
Saarbrücken building E1 5, room 005 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Complex statistical models are increasingly being used or considered for use in high-stakes decision-making pipelines in domains such as financial services, health care, criminal justice and human services. These models are often investigated as possible improvements over more classical tools such as regression models or human judgement. While the modeling approach may be new, the practice of using some form of risk assessment to inform decisions is not. When determining whether a new model should be adopted, it is essential to be able to compare the proposed model to the existing approach across a range of task-relevant accuracy and fairness metrics. In this talk I will describe a subgroup analysis approach for characterizing how models differ in terms of fairness-related quantities such as racial or gender disparities. I will also talk about an ongoing collaboration with the Allegheny County Department of Health and Human Services on developing and implementing a risk assessment tool for use in child welfare referral screening.
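As a minimal sketch of the kind of subgroup comparison described above, one fairness-related quantity is the false positive rate computed per group; the labels and predictions below are hypothetical, not from the talk:

```python
def false_positive_rate(y_true, y_pred):
    """Fraction of true negatives (label 0) that the model flags as positive."""
    false_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return false_pos / negatives

# Hypothetical ground-truth labels (y_true) and model decisions (y_pred) per group.
groups = {
    "group A": ([0, 0, 1, 0, 1], [1, 0, 1, 0, 1]),
    "group B": ([0, 1, 0, 0, 1], [0, 1, 1, 1, 1]),
}

# A gap between the per-group rates is one disparity a subgroup
# analysis would surface when comparing a proposed model to an existing one.
for name, (y_true, y_pred) in groups.items():
    print(name, round(false_positive_rate(y_true, y_pred), 3))
```

The same comparison would be repeated for other task-relevant accuracy and fairness metrics before deciding whether to adopt a new model.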

https://arxiv.org/abs/1707.00046

http://www.post-gazette.com/local/region/2017/04/09/Allegheny-County-using-algorithm-to-assist-in-child-welfare-screening/stories/201701290002

Speaker's bio:

please see http://www.andrew.cmu.edu/user/achoulde/



Decision Making and The Value of Explanation
Kathy Strandburg | NYU Law

2017-07-07, 10:00 - 11:30
Saarbrücken building E1 5, room 005

Abstract:

Much of the policy and legal debate about algorithmic decision-making has focused on issues of accuracy and bias. Equally important, however, is the question of whether algorithmic decisions are understandable by human observers: whether the relationship between algorithmic inputs and outputs can be explained. Explanation has long been deemed a crucial aspect of accountability, particularly in legal contexts. By requiring that powerful actors explain the bases of their decisions — the logic goes — we reduce the risks of error, abuse, and arbitrariness, thus producing more socially desirable decisions. Decision-making processes employing machine learning algorithms complicate this equation. Such approaches promise to refine and improve the accuracy and efficiency of decision-making processes, but the logic and rationale behind each decision often remains opaque to human understanding. Indeed, at a technical level, it is not clear that all algorithms can be made explainable and, at a normative level, it is an open question when and if the costs of making algorithms explainable outweigh the benefits. This presentation will begin to map out some of the issues that must be addressed in determining in what contexts, and under what constraints, machine learning approaches to governmental decision-making are appropriate.

Speaker's bio:

-



Finding Fake News
Giovanni Luca Ciampaglia | Indiana University Network Science Institute

2017-06-26, 10:30 - 12:00
Saarbrücken building E1 5, room 029

Abstract:

Two-thirds of all American adults access the news through social media. But social networks and social-media recommendations lead to information bubbles, and personalization and recommendations, by maximizing the click-through rate, lead to ideological polarization. Consequently, rumors, false news, conspiracy theories, and now even fake news sites are increasingly worrisome phenomena. While media organizations (Snopes.com, PolitiFact, FactCheck.org, et al.) have stepped up their efforts to verify news, political scientists tell us that fact-checking efforts may be ineffective or even counterproductive. To address some of these challenges, researchers at Indiana University are working on an open platform for the automatic tracking of both online fake news and fact-checking on social media. The goal of the platform, named Hoaxy, is to reconstruct the diffusion networks induced by hoaxes and their corrections as they are shared online and spread from person to person.

Speaker's bio:

Giovanni Luca Ciampaglia is an assistant research scientist at the Indiana University Network Science Institute (IUNI). His research interests are in the emerging disciplines of network science and computational social science, with a particular focus on information diffusion on the Internet and social media. At IUNI, he leads various efforts within the Social Network Science Hub. His research has been covered in major news outlets, including the Wall Street Journal, the Economist, Wired, MIT Technology Review, NPR, and CBS News, to cite a few. He holds a Ph.D. in Informatics from the University of Lugano, Switzerland and a M.Sc. (Laurea) from Sapienza University of Rome, Italy.



Synchronization Strings: Optimal Coding for Insertions and Deletions
Prof. Bernhard Haeupler | Carnegie Mellon University

2017-05-10, 11:00 - 11:45
Saarbrücken building E1 5, room 002

Abstract:

This talk will introduce synchronization strings, which provide a novel way to efficiently deal with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to cope with than the more commonly considered Hamming errors, i.e., symbol corruptions and erasures. For every eps > 0, synchronization strings allow one to index a sequence with an eps^{-O(1)}-size alphabet such that one can efficiently transform k synchronization errors into (1 + eps) k Hamming errors. This powerful new technique has many applications. This talk will focus on designing insdel codes, i.e., error-correcting block codes (ECCs) for insertion-deletion channels.
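To see why synchronization errors are so much harder than Hamming errors, note that a single deletion shifts every subsequent symbol, so the Hamming distance explodes even though only one edit occurred. A minimal sketch with hypothetical example strings:

```python
def hamming(a, b):
    """Hamming distance: number of positions where two equal-length strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

sent = "abcdefgh"
recv = "bcdefgh"          # a single deletion of the leading symbol

# Viewed position-by-position (padded to equal length), every symbol differs:
print(hamming(sent, recv + "x"))   # 8 Hamming errors
# Viewed as a synchronization error, it is just one deletion:
print(edit_distance(sent, recv))   # 1
```

Synchronization strings supply the indexing that lets a decoder re-align such shifted positions, reducing the problem back to the well-understood Hamming setting.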

While ECCs for both half-errors and synchronization errors have been intensely studied, the latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey: "Channels with synchronization errors ... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable."

A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size. This instantly transfers much of the highly developed understanding for regular ECCs into the realm of insdel codes. Most notably, for the complete noise spectrum we obtain efficient insdel codes which get arbitrarily close to the optimal rate-distance tradeoff, i.e., for any delta < 1 and eps > 0 we give insdel codes achieving a rate of 1 - delta that efficiently correct a delta - eps fraction of insertions or deletions.

This is joint work with Amirbehshad Shahrasbi.

Speaker's bio:

http://www.cs.cmu.edu/~haeupler/



Quantifying and Reducing Polarization on Social Media
Kiran Garimella | Aalto University

2017-05-10, 09:45 - 11:15
Saarbrücken building E1 5, room 005

Abstract:

Social media has revolutionized how people are exposed to information and how they consume news. Beyond the many advantages and capabilities that social-media platforms bring, a point of criticism has been the creation of filter bubbles or echo chambers, caused by social homophily as well as by algorithmic personalization and recommendation in content delivery. In this talk, I will present the methods we developed to (i) detect and quantify polarization on social media, (ii) monitor the evolution of polarization over time, and (iii) devise methods to overcome the effects caused by increased polarization. We build on existing studies and ideas from social science, combined with principles from graph theory, to design algorithms that are language-independent, domain-agnostic, and scalable to large numbers of users.

Speaker's bio:

Kiran Garimella is a PhD student at Aalto University. His research focuses on identifying and combating polarization on social media. In general he is interested in making use of large public datasets to understand human behaviour. Prior to starting his PhD, he worked as a Research Engineer at Yahoo Research and QCRI, and as an intern at Carnegie Mellon University, LinkedIn, and Amazon. His work on reducing polarization received the best student paper award at WSDM’17.



An Effectful Way to Eliminate Addiction to Dependence
Pierre-Marie Pédrot | University of Ljubljana

2017-05-08, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

We define a syntactic monadic translation of type theory, called the weaning translation, that allows for a large range of effects in dependent type theory, such as exceptions, non-termination, non-determinism, or writing operations. Through the lens of a call-by-push-value decomposition, we explain why the traditional approach fails with type dependency and justify our approach. Crucially, the construction requires that the universe of algebras of the monad itself forms an algebra. The weaning translation applies to a version of the Calculus of Inductive Constructions with a restricted version of dependent elimination, dubbed Baclofen Type Theory, which we conjecture is the proper generic way to mix effects and dependence. This provides the first effectful version of CIC, which can be implemented as a Coq plugin.

Speaker's bio:

-



Intelligent Control Systems
Sebastian Trimpe | Max Planck Institute for Intelligent Systems

2017-05-04, 11:00 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Due to modern computer and data technology, we can today collect, store, process, and share more data than ever before. This data revolution opens fundamentally new ways to think about the classical concept of feedback control as a basis for building future (artificial) intelligent systems that interact with the physical world. In this talk, I will provide an overview of our recent research on intelligent control systems, which leverages machine learning and modern communication networks for control. I will present algorithms that enable systems to (i) autonomously learn from data, (ii) interconnect in cooperative networks, and (iii) use their resources efficiently. Throughout the talk, the developed algorithms and theory are highlighted with experiments on humanoid robots and a self-balancing dynamic sculpture.

Speaker's bio:

Sebastian Trimpe is a Senior Research Scientist and Group Leader at the Max Planck Institute for Intelligent Systems in Tuebingen, Germany, where he leads the Intelligent Control Systems group. Sebastian obtained his Ph.D. (Dr. sc.) degree in 2013 from ETH Zurich with Raffaello D'Andrea at the Institute for Dynamic Systems and Control. Before, he received a B.Sc. degree in General Engineering Science in 2005, an M.Sc. degree (Dipl.-Ing.) in Electrical Engineering in 2007, and an MBA degree in Technology Management in 2007, all from Hamburg University of Technology. In 2007, he was a research scholar at the University of California at Berkeley. Sebastian is the recipient of the General Engineering Award for the best undergraduate degree (2005), a scholarship from the German Academic National Foundation (2002-2007), the triennial IFAC World Congress Interactive Paper Prize (2011), and the Klaus Tschira Award for achievements in public understanding of science (2014).



On Rationality of Nonnegative Matrix Factorization
James Worrell | University of Oxford

2017-05-02, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

The nonnegative rank of a nonnegative m x n matrix M is the smallest number d such that M can be written as the product M = WH of a nonnegative m x d matrix W and a nonnegative d x n matrix H.  The notions of nonnegative rank and nonnegative matrix factorization have a wide variety of applications, including bioinformatics, computer vision, communication complexity, document clustering, and recommender systems. A longstanding open problem is whether, when M is a rational matrix, the factors W and H in a rank decomposition M=WH can always be chosen to be rational.  In this talk we resolve this problem negatively and discuss consequences of this result for the computational complexity of computing nonnegative rank.
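As a concrete illustration of the definition above, the following sketch computes an approximate nonnegative factorization with the standard Lee-Seung multiplicative updates. This is a well-known numerical heuristic, unrelated to the talk's question about exact rational factors; the example matrix and all constants are chosen purely for illustration.

```python
import random

# Minimal pure-Python sketch: approximate M ~= W H with nonnegative
# factors via Lee-Seung multiplicative updates.  A numerical heuristic;
# the talk's rationality question concerns *exact* factorizations.

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(M, d, iters=2000, seed=0):
    rnd = random.Random(seed)
    m, n = len(M), len(M[0])
    W = [[rnd.random() for _ in range(d)] for _ in range(m)]
    H = [[rnd.random() for _ in range(n)] for _ in range(d)]
    tiny = 1e-12                               # avoid division by zero
    for _ in range(iters):
        num, den = matmul(transpose(W), M), matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + tiny) for j in range(n)]
             for i in range(d)]
        num, den = matmul(M, transpose(H)), matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * num[i][j] / (den[i][j] + tiny) for j in range(d)]
             for i in range(m)]
    return W, H

M = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 1.0, 1.0]]        # rank 2, and nonnegative rank 2
W, H = nmf(M, d=2)
R = matmul(W, H)
err = max(abs(M[i][j] - R[i][j]) for i in range(3) for j in range(3))
```

Because the updates only multiply nonnegative quantities, W and H stay entrywise nonnegative throughout; for this easy rank-2 matrix the reconstruction error shrinks toward zero.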

This is joint work with Dmitry Chistikov, Stefan Kiefer, Ines Marusic, and Mahsa Shirmohammadi.

Speaker's bio:

James Worrell is a Professor of Computer Science at the University of Oxford.  He currently holds an EPSRC Established Career Fellowship on the subject of Verification of Linear Dynamical Systems.



Securing enclaves with formal verification
Andrew Baumann | Microsoft Research, Redmond

2017-04-26, 16:00 - 16:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Moore's Law may be slowing, but, perhaps as a result, other measures of processor complexity are only accelerating. In recent years, Intel's architects have turned to an alphabet soup of instruction set extensions such as MPX, SGX, MPK, and CET as a way to sell CPUs through new security features. SGX in particular promises powerful security: user-mode "enclaves" protected against both physical attacks and privileged software adversaries. To achieve this, SGX's designers implemented an isolation mechanism approaching the complexity of an OS microkernel in the ISA, using an inscrutable mix of silicon and microcode. However, while CPU-based security mechanisms are harder to attack, they are also harder to deploy and update, and already a number of significant flaws have been identified. Worse, the availability of new SGX features is dependent on the slowing deployment of new CPUs.

In this talk, I'll describe an alternative approach to providing SGX-like enclaves: decoupling the underlying hardware mechanisms such as memory encryption, address-space isolation and attestation from a privileged software monitor which implements enclaves. The monitor's trustworthiness is guaranteed by a machine-checked proof of both functional correctness and high-level security properties. The ultimate goal is to achieve security that is equivalent or better than SGX while decoupling enclave features from the underlying hardware.

Speaker's bio:

Andrew Baumann is a researcher in the Systems Group at Microsoft Research, Redmond. His research interests include operating systems, distributed systems, and systems security. Recent research highlights include the Barrelfish multikernel OS, Drawbridge LibOS, and Haven trusted cloud platform. He was previously at The University of New South Wales (BE/PhD) and ETH Zurich (postdoc).



Comprehensive deep linking for mobile apps
Oriana Riva | Microsoft Research, Redmond

2017-04-10, 10:30 - 10:30
Kaiserslautern building G26, room 112 / simultaneous videocast to Saarbrücken building E1 5, room 005

Abstract:

Web deep links are instrumental to many fundamental user experiences such as navigating to one web page from another, bookmarking a page, or sharing it with others. Such experiences are not possible with individual pages inside mobile apps, since historically mobile apps did not have links equivalent to web deep links. Mobile deep links, introduced in recent years, still lack many important properties of web deep links. Unlike web links, mobile deep links must be explicitly built into apps by developers, cover a small number of predefined pages, and are defined statically to navigate to a page for a given link, but not to dynamically generate a link for a given page. Another problem with state-of-the-art deep links is that, once exposed, they are hard to discover, thus limiting their usage in both first and third-party experiences.

In this talk, I'll give an overview of two new deep linking mechanisms that address these problems. First, we implemented an application library that transparently tracks data- and UI-event-dependencies of app pages and encodes the information in links to the pages; when a link is invoked, the information is utilized to recreate the target page quickly and accurately. Second, using static and dynamic analysis we prototyped a tool that can automatically discover links that are exposed by an app; in addition, it can discover many links that are not explicitly exposed. The goal is to obtain links to every page in an app automatically and precisely.

Speaker's bio:

Oriana Riva is a researcher at Microsoft Research, Redmond. Prior to joining MSR in 2010, she received her PhD from the University of Helsinki, and was a PostDoc at ETH Zurich. Her research interests revolve around mobile systems, including the programming abstractions, developer tools and cloud infrastructures required to expand their role in the computing world.



Local Reasoning for Concurrency, Distribution and Web Programming
Azalea Raad | Imperial College

2017-04-03, 10:00 - 11:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In this talk I will present my research in developing local reasoning techniques in both concurrent and sequential settings.

On the concurrency side, I’ll present my work on the program logic of CoLoSL (Concurrent Local Subjective Logic) and its application to various fine-grained concurrent algorithms. A key difficulty in verifying concurrent programs is reasoning compositionally about each thread in isolation. CoLoSL is the first program logic to introduce the general composition and framing of interference relations (describing how shared resources may be manipulated by each thread) in the spirit of resource composition and framing in separation logic. This in turn enables local reasoning and allows for more concise specifications and proofs.

I will then present preliminary work on extending CoLoSL to reason about distributed database applications running under the snapshot isolation (SI) consistency model. SI is a commonly used consistency model for transaction processing, implemented by most distributed databases. Existing work focuses on the semantics of SI, while verification techniques for client-side applications remain unexplored. To fill this gap, I am extending CoLoSL towards a program logic for client-side reasoning under SI.

On the sequential side, I’ll briefly discuss my work on the specification and verification of web programs. My research in this area includes: a compositional specification of the DOM (Document Object Model) library in separation logic; the integration of this DOM specification with the JaVerT (JavaScript Verification Toolchain) framework for automated DOM/JavaScript client verification; as well as ongoing work towards extending JaVerT to reason about higher-order JavaScript programs.

Speaker's bio:

Azalea is currently a PhD student at Imperial College's Department of Computing, supervised by Philippa Gardner and Sophia Drossopoulou. She is interested in applying formal verification techniques to web technologies, programming languages, and new application domains.



Combining Computing, Communications and Controls in Safety Critical Systems
Professor Richard Murray | Caltech

2017-03-31, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Flight critical subsystems in aerospace vehicles must achieve probability of failure rates of less than 1 failure in 10^9 flight hours (i.e. less than 1 failure per 100,000 years of operation).  Systems that achieve this level of reliability are hard to design, hard to verify, and hard to validate, especially if software is involved.  In this talk, I will talk about some of the challenges that the aerospace community faces in designing systems with this level of reliability and how tools from formal methods and control theory might help.  I will also describe some of my group’s work on the synthesis of reactive protocols for hybrid systems and its applications to the design of safety critical systems.

Speaker's bio:

Richard M. Murray received the B.S. degree in Electrical Engineering from California Institute of Technology in 1985 and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 1988 and 1991, respectively. He is currently the Thomas E. and Doris Everhart Professor of Control & Dynamical Systems and Bioengineering at Caltech. Murray's research is in the application of feedback and control to networked systems, with applications in biology and autonomy. Current projects include the analysis and design of biomolecular feedback circuits, synthesis of discrete decision-making protocols for reactive systems, and design of highly resilient architectures for autonomous systems.



Hardening cloud and datacenter systems against configuration errors
Tianyin Xu | University of California, San Diego

2017-03-22, 10:30 - 10:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Configuration errors are among the dominant causes of service-wide, catastrophic failures in today's cloud and datacenter systems. Despite the wide adoption of fault-tolerance and recovery techniques, these large-scale software systems still fail to effectively deal with configuration errors. In fact, even tolerance/recovery mechanisms are often misconfigured and thus crippled in reality.

In this talk, I will present our research efforts towards hardening cloud and datacenter systems against configuration errors. I will start with work that seeks for understanding the fundamental causes of misconfigurations. I will then focus on two of my approaches, PCheck and Spex, that enable software systems to anticipate and defend against configuration errors. PCheck generates checking code to help systems detect configuration errors early, and Spex exposes bad system reactions to configuration errors based on constraints inferred from source code.
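To make the idea of early configuration checking concrete, here is a minimal hypothetical sketch (not PCheck's actual generated code): configuration values are parsed and validated against a specification at startup, so errors surface before the system depends on them. All option names and validity predicates below are invented for illustration.

```python
# Illustrative sketch only: check configuration values at load time,
# before first use, instead of failing deep inside a request path.
# The option names and predicates are hypothetical.

SPEC = {
    # name: (parser, validity predicate)
    "worker_threads": (int, lambda v: v >= 1),
    "timeout_secs":   (float, lambda v: v > 0),
    "log_level":      (str, lambda v: v in {"debug", "info", "warn", "error"}),
}

def check_config(raw):
    """Return (parsed config, list of error messages)."""
    parsed, errors = {}, []
    for key, (parse, valid) in SPEC.items():
        if key not in raw:
            errors.append(f"missing option: {key}")
            continue
        try:
            value = parse(raw[key])
        except ValueError:
            errors.append(f"{key}: cannot parse {raw[key]!r}")
            continue
        if not valid(value):
            errors.append(f"{key}: invalid value {value!r}")
        else:
            parsed[key] = value
    return parsed, errors

ok, errs = check_config({"worker_threads": "8", "timeout_secs": "2.5",
                         "log_level": "info"})
bad, bad_errs = check_config({"worker_threads": "0", "timeout_secs": "abc"})
```

The second call reports all three problems (out-of-range value, unparsable value, missing option) in one pass at startup, rather than crashing later when each option is first read.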

Speaker's bio:

Tianyin Xu is a Ph.D. candidate in Computer Science and Engineering at University of California, San Diego. His research interests intersect systems, software engineering, and HCI towards the overarching goal of building reliable and secure systems. His dissertation work has impacted the configuration design and implementation of real-world commercial and open-source systems, and has received a Best Paper Award at OSDI 2016.



A New Approach to Network Functions
Aurojit Panda | University of California, Berkeley

2017-03-17, 10:00 - 11:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Modern networks do far more than just deliver packets: they provide network functions, including firewalls, caches, and WAN optimizers, that are crucial for scaling networks, ensuring security, and enabling new applications. Network functions were traditionally implemented using dedicated hardware middleboxes, but in recent years they are increasingly being deployed as VMs on commodity servers. While many herald this move towards network function virtualization (NFV) as a great step forward, I argue that accepted virtualization techniques are ill-suited to network functions. In this talk I describe NetBricks, a new approach to building and running virtualized network functions that speeds development and increases performance. I end the talk by discussing the implications of being able to easily create and insert new network functions.

Speaker's bio:

Aurojit Panda is a PhD candidate in Computer Science at the University of California, Berkeley, where he is advised by Scott Shenker. His work spans programming languages, networking, and systems, and his recent work has investigated network verification, consensus algorithms in software-defined networks, and frameworks for building network functions. He has received best paper awards at SIGCOMM and EuroSys, and was previously awarded a Qualcomm Innovation Fellowship.



Event time series analysis and its applications to social media analysis
Ryota Kobayashi | National Institute of Informatics, Japan

2017-03-14, 14:09 - 15:39
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

We first present an approach for analyzing event time series, i.e., the times at which events occur. Interestingly, event time series appear in various fields, including neuroscience (action potentials of a neuron), social media analysis (Twitter, Facebook), and so on. We then develop a framework for forecasting retweet activity through the analysis of a Twitter dataset (Kobayashi & Lambiotte, ICWSM, 2016). This is joint work with Renaud Lambiotte.
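As a minimal illustration of point-process modelling of event times, the sketch below evaluates a Hawkes intensity with an exponential kernel. The TiDeH model of Kobayashi & Lambiotte fits a time-dependent variant to retweet data; the constants here are illustrative assumptions, not values from the paper.

```python
import math

# Minimal sketch: a Hawkes process intensity with an exponential kernel.
# Each past event temporarily raises the rate of future events; the
# parameters mu, alpha, beta below are illustrative, not fitted values.

def hawkes_intensity(t, events, mu=0.1, alpha=0.8, beta=1.0):
    """lambda(t) = mu + alpha * sum_{t_i < t} beta * exp(-beta * (t - t_i))."""
    return mu + alpha * sum(beta * math.exp(-beta * (t - ti))
                            for ti in events if ti < t)

events = [0.0, 0.5, 0.6, 3.0]         # observed event times (e.g. retweets)
lam = hawkes_intensity(1.0, events)   # elevated right after the early burst
```

Forecasting then amounts to extrapolating this intensity forward in time from the events observed so far.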

Speaker's bio:

Dr. Kobayashi is an assistant professor at the National Institute of Informatics in Japan. His research interests include mathematical modeling of human activity and computational neuroscience. He received the EPFL award and the INCF (International Neuroinformatics Coordinating Facility) prize for the development of a single-neuron model.



Privacy as a Service
Raymond Cheng | University of Washington

2017-03-13, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Current cloud services are vulnerable to hacks and surveillance programs that undermine user privacy and personal liberties. In this talk, I present how we can build practical systems for protecting user privacy from powerful persistent threats. I will discuss Talek, a private publish-subscribe protocol. With Talek, application developers can send and synchronize data through the cloud without revealing any information about data contents or communication patterns of application users. The protocol is designed with provable security guarantees and practical performance, achieving 3-4 orders of magnitude better throughput than other systems with comparable security goals. I will also discuss Radiatus, a security-focused web framework to protect web apps against external intrusions, and uProxy, an Internet censorship circumvention tool in deployment today.

Speaker's bio:

Raymond Cheng is a PhD student working with Thomas Anderson and Arvind Krishnamurthy at the University of Washington. Previously, he spent several years conducting security research in the US government. Raymond's research area is in building practical systems for security and privacy. He has published 6 papers in top systems conferences, including OSDI, Eurosys, and SOCC. In addition, Raymond has been an invited speaker at universities and research labs, such as Stanford, Microsoft Research, Google, and Palantir.



Randomized Algorithms Meet Formal Verification
Justin Hsu | University of Pennsylvania

2017-03-08, 10:00 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Algorithms and formal verification are two classical areas of computer science. The two fields apply rigorous mathematical proof to seemingly disparate ends---on the one hand, analyzing computational efficiency of algorithms; on the other, designing techniques to mechanically show that programs are correct.

In this talk, I will present a surprising confluence of ideas from these two areas. First, I will show how coupling proofs, used to analyze random walks and Markov chains, correspond to proofs in the program logic pRHL (probabilistic Relational Hoare Logic). This connection enables formal verification of novel probabilistic properties, and provides a structured understanding of proofs by coupling. Then, I will show how an approximate version of pRHL, called apRHL, points to a new, approximate version of couplings closely related to differential privacy. The corresponding proof technique---proof by approximate coupling---enables cleaner proofs of differential privacy, both for humans and for formal verification. Finally, I will share some directions towards a possible "Theory AB", blending ideas from both worlds.
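The differential-privacy guarantee that approximate couplings capture can be checked concretely on a standard mechanism. The sketch below illustrates the privacy definition itself, not the pRHL proof system: it verifies numerically that the Laplace mechanism's output densities on adjacent inputs differ pointwise by at most a factor of e^eps.

```python
import math

# Sketch of the guarantee that approximate couplings certify, checked
# numerically on the Laplace mechanism: for query answers differing by
# at most `sens`, output densities differ by a factor of at most e^eps.

def laplace_pdf(x, loc, scale):
    return math.exp(-abs(x - loc) / scale) / (2 * scale)

def max_density_ratio(q1, q2, eps, sens=1.0):
    scale = sens / eps
    grid = [i / 10 for i in range(-100, 101)]      # sample points
    return max(laplace_pdf(x, q1, scale) / laplace_pdf(x, q2, scale)
               for x in grid)

eps = 0.5
ratio = max_density_ratio(q1=3.0, q2=4.0, eps=eps)  # adjacent query answers
```

A proof by (approximate) coupling establishes exactly this kind of pointwise bound once and for all, instead of checking it numerically.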

Speaker's bio:

Justin Hsu is a final year graduate student in Computer Science at the University of Pennsylvania. He obtained his undergraduate degree in Mathematics from Stanford University. His research interests span formal verification and theoretical computer science, including verification of randomized algorithms, differential privacy, and game theory. He is the recipient of a Simons graduate fellowship in Theoretical Computer Science.



Constrained Counting and Sampling: Bridging the Gap between Theory and Practice
Kuldeep Meel | Rice University

2017-03-07, 10:00 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Constrained counting and sampling are two fundamental problems in Computer Science with numerous applications, including network reliability, privacy, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, subject to a given weighting function, of the set of solutions of the given constraints. In constrained sampling, the task is to sample randomly, subject to a given weighting function, from the set of solutions to a set of given constraints. In this talk, I will introduce a novel algorithmic framework for constrained sampling and counting that combines the classical algorithmic technique of universal hashing with the dramatic progress made in Boolean reasoning over the past two decades. This has allowed us to obtain breakthrough results in constrained sampling and counting, providing a new algorithmic toolbox in machine learning, probabilistic reasoning, privacy, and design verification. I will demonstrate the utility of the above techniques on various real applications including probabilistic inference, design verification and our ongoing collaboration in estimating the reliability of critical infrastructure networks during natural disasters.
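The hashing idea behind this line of work can be sketched in miniature: add random XOR (parity) constraints to split the solution space, and read the count off the number of constraints needed to isolate roughly one solution. In the toy below, brute-force enumeration stands in for a SAT solver, and the redraw-on-empty heuristic is a simplification of what real tools in this family do.

```python
import itertools
import random

# Toy sketch of hashing-based approximate model counting: each random
# XOR constraint splits the solution space roughly in half, so the
# number of constraints needed to isolate ~1 solution estimates log2
# of the count.  Brute-force enumeration plays the role of a solver.

def solutions(constraint, n):
    return [bits for bits in itertools.product([0, 1], repeat=n)
            if constraint(bits)]

def random_xor(n, rnd):
    idx = [i for i in range(n) if rnd.random() < 0.5]
    rhs = rnd.randrange(2)
    return lambda bits: sum(bits[i] for i in idx) % 2 == rhs

def estimate_count(constraint, n, rnd):
    xors = []
    while True:
        combined = lambda b: constraint(b) and all(x(b) for x in xors)
        k = len(solutions(combined, n))
        if k == 1 or (k == 0 and not xors):
            return k * 2 ** len(xors)      # cell size * number of cells
        if k == 0:
            xors.pop()                     # unlucky split: redraw last XOR
        else:
            xors.append(random_xor(n, rnd))

rnd = random.Random(1)
even_parity = lambda bits: sum(bits) % 2 == 0   # exactly 32 solutions on 6 bits
trials = sorted(estimate_count(even_parity, 6, rnd) for _ in range(30))
med = trials[15]                                # median estimate, near 32
```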

Speaker's bio:

Kuldeep Meel is a final-year PhD candidate at Rice University working with Prof. Moshe Vardi and Prof. Supratik Chakraborty. His research broadly lies at the intersection of artificial intelligence and formal methods. He is the recipient of a 2016-17 IBM PhD Fellowship, the 2016-17 Lodieska Stockbridge Vaughn Fellowship and the 2013-14 Andrew Ladd Fellowship. His research won the best student paper award at the International Conference on Constraint Programming 2015. He obtained a B.Tech. from IIT Bombay and an M.S. from Rice in 2012 and 2014 respectively. He co-won the 2014 Vienna Center of Logic and Algorithms International Outstanding Masters thesis award.



Variational Bayes In Private Settings
Mijung Park | Amsterdam Machine Learning Lab

2017-03-03, 10:00 - 10:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Bayesian methods are frequently used for analysing privacy-sensitive datasets, including medical records, emails, and educational data, and there is a growing need for practical Bayesian inference algorithms that protect the privacy of individuals' data. To this end, we provide a general framework for privacy-preserving variational Bayes (VB) for a large class of probabilistic models, called the conjugate exponential (CE) family. Our primary observation is that when models are in the CE family, we can privatise the variational posterior distributions simply by perturbing the expected sufficient statistics of the complete-data likelihood. For widely used non-CE models with binomial likelihoods (e.g., logistic regression), we exploit the Polya-Gamma data augmentation scheme to bring such models into the CE family, such that inferences in the modified model resemble the original (non-private) variational Bayes algorithm as closely as possible. The iterative nature of variational Bayes presents a further challenge for privacy preservation, as each iteration increases the amount of noise needed. We overcome this challenge by combining: (1) a relaxed notion of differential privacy, called concentrated differential privacy, which provides a tight bound on the privacy cost of multiple VB iterations and thus significantly decreases the amount of additive noise; and (2) the privacy amplification effect of subsampling mini-batches from large-scale data in stochastic learning. We empirically demonstrate the effectiveness of our method in CE and non-CE models including latent Dirichlet allocation (LDA), Bayesian logistic regression, and Sigmoid Belief Networks (SBNs), evaluated on real-world datasets.
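The key observation, perturbing expected sufficient statistics rather than the data, can be illustrated on the simplest conjugate-exponential model. The toy below privatizes a Beta-Bernoulli posterior with Laplace noise; the paper's actual mechanism (concentrated DP, subsampled mini-batches) is more refined, and all constants here are illustrative assumptions.

```python
import random

# Toy version of the key observation for conjugate-exponential models:
# privatise the *sufficient statistic*, not the data.  Beta-Bernoulli
# example with Laplace noise; eps, priors, and data are illustrative.

def private_posterior(data, eps, a0=1.0, b0=1.0, rnd=random):
    n = len(data)
    stat = sum(data)                    # sufficient statistic, sensitivity 1
    # Laplace(0, 1/eps) noise as a difference of two exponentials
    noisy = stat + rnd.expovariate(eps) - rnd.expovariate(eps)
    noisy = min(max(noisy, 0.0), float(n))          # clamp to valid range
    return a0 + noisy, b0 + n - noisy               # Beta posterior params

rnd = random.Random(0)
data = [1] * 70 + [0] * 30
a, b = private_posterior(data, eps=1.0, rnd=rnd)
post_mean = a / (a + b)        # close to the non-private 71/102 ~ 0.70
```

Because only the scalar statistic is perturbed, the downstream conjugate update is unchanged, which is exactly what makes the CE family convenient for private inference.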

Speaker's bio:

Mijung Park completed her Ph.D. in the department of Electrical and Computer Engineering under the supervision of Prof. Jonathan Pillow (now at Princeton University) and Prof. Alan Bovik at The University of Texas at Austin. She worked with Prof. Maneesh Sahani as a postdoc at the Gatsby computational neuroscience unit at University College London. Currently, she works with Prof. Max Welling as a postdoc in the informatics institute at University of Amsterdam. Her research focuses on developing practical algorithms for privacy preserving data analysis. Previously, she worked on a broad range of topics including approximate Bayesian computation (ABC), probabilistic manifold learning, active learning for drug combinations and neurophysiology experiments, and Bayesian structure learning for sparse and smooth high dimensional parameters.



Learning With and From People
Adish Singla | ETH Zürich

2017-02-28, 10:00 - 10:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

People are becoming an integral part of computational systems, fueled primarily by recent technological advancements as well as deep-seated economic and societal changes. Consequently, there is a pressing need to design new data science and machine learning frameworks that can tackle challenges arising from human participation (e.g. questions about incentives and users’ privacy) and can leverage people’s capabilities (e.g. ability to learn).

In this talk, I will share my research efforts at the confluence of people and computing to address real-world problems. Specifically, I will focus on collaborative consumption systems (e.g. shared mobility systems and sharing economy marketplaces like Airbnb) and showcase the need to actively engage users, who would otherwise act primarily in their own interest, in shaping demand. The main idea of engaging users is to incentivize them to switch to alternate choices that would improve the system’s effectiveness. To offer optimized incentives, I will present novel multi-armed bandit algorithms and online learning methods in structured spaces for learning users’ costs of switching between different pairs of available choices. Furthermore, to tackle the challenge of data sparsity and to speed up learning, I will introduce hemimetrics as a structural constraint over users’ preferences. I will show experimental results of applying the proposed algorithms to two real-world applications: incentivizing users to explore unreviewed hosts on services like Airbnb and tackling the imbalance problem in bike sharing systems. In collaboration with an ETH Zurich spinoff and a public transport operator in the city of Mainz, Germany, we deployed these algorithms via a smartphone app among users of a bike sharing system. I will share the findings from this deployment.
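The bandit formulation can be sketched generically. The code below runs plain UCB1 over hypothetical incentive levels, with acceptance probabilities invented for illustration; the talk's algorithms for structured spaces and hemimetric constraints are more specialized than this.

```python
import math
import random

# Generic UCB1 sketch of the bandit view: each arm is an incentive
# level, the reward is whether a user accepts the proposed switch.
# Acceptance probabilities are invented for illustration.

def ucb1(accept_prob, rounds, rnd):
    n_arms = len(accept_prob)
    counts = [0] * n_arms
    wins = [0.0] * n_arms
    for t in range(1, rounds + 1):
        if t <= n_arms:
            arm = t - 1                     # play each arm once first
        else:
            arm = max(range(n_arms),
                      key=lambda a: wins[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        wins[arm] += rnd.random() < accept_prob[arm]
    return counts

rnd = random.Random(42)
counts = ucb1([0.1, 0.3, 0.6], rounds=2000, rnd=rnd)  # three incentive levels
```

Over time the algorithm concentrates its offers on the incentive level users accept most often, while still occasionally exploring the alternatives.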

Speaker's bio:

Adish Singla is a PhD student in the Learning and Adaptive Systems Group at ETH Zurich. His research focuses on designing new machine learning frameworks and developing algorithmic techniques, particularly for situations where people are an integral part of computational systems. Before starting his PhD, he worked as a Senior Development Lead in Bing Search for over three years. He is a recipient of the Facebook Fellowship in the area of Machine Learning, Microsoft Research Tech Transfer Award, and Microsoft Gold Star Award.



A New Verified Compiler Backend for CakeML
Magnus Myreen | Chalmers University of Technology, Göteborg

2017-02-22, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

The CakeML project has recently produced a verified compiler which we believe to be the most realistic verified compiler for a functional programming language to date. In this talk I'll give an overview of the CakeML project with focus on the new compiler, in particular how the compiler is structured, how the intermediate languages are designed and how the proofs are carried out. The talk will stay at a fairly high-level, but I am happy to dive into details for any of the parts that I know well.

The CakeML project is currently a collaboration between six sites across three continents. The new compiler is due to: Anthony Fox (Cambridge, UK), Ramana Kumar (Data61, Sydney Australia), Magnus Myreen (Chalmers, Sweden), Michael Norrish (Data61, Canberra Australia), Scott Owens (Kent, UK), and Yong Kiam Tan (CMU, USA).

Speaker's bio:

I grew up in Finland, but did my undergraduate studies in Oxford, UK, where Dr Jeff Sanders was my tutor. I completed my PhD on verification of machine-code programs in 2009 at the University of Cambridge, UK, supervised by Prof. Mike Gordon. My PhD dissertation was selected as the winner of the BCS Distinguished Dissertation Competition in 2010. In 2012, I became a Royal Society Research Fellow. In 2014, I moved to Chalmers, where I became Associate Professor (tenured) in 2015.



Safe, Real-Time Software Reference Architectures for Cyber-Physical Systems
Renato Mancuso | University of Illinois at Urbana-Champaign

2017-02-21, 10:30 - 10:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

There has been an uptrend in the demand and need for complex Cyber-Physical Systems (CPS), such as self-driving cars, unmanned aerial vehicles (UAVs), and smart manufacturing systems for Industry 4.0. CPS often need to accurately sense the surrounding environment by using high-bandwidth acoustic, imaging and other types of sensors, and to take coordinated decisions and issue time-critical actuation commands. Hence, temporal predictability in sensing, communication, computation, and actuation is a fundamental attribute. Additionally, CPS must operate safely even in the face of software and hardware misbehavior to avoid catastrophic failures. To satisfy the increasing demand for performance, modern computing platforms have substantially increased in complexity; for instance, multi-core systems are now mainstream, and partially re-programmable systems-on-chip (SoCs) have just entered production. Unfortunately, extensive and unregulated sharing of hardware resources directly undermines the ability to guarantee strong temporal determinism in modern computing platforms. Novel software architectures are needed to restore the temporal correctness of complex CPS when using these platforms. My research vision is to design and implement software architectures that can serve as a reference for the development of high-performance CPS, and that embody two main requirements: temporal predictability and robustness. In this talk, I will address the following questions concerning modern multi-core systems: Why can application timing be highly unpredictable? What techniques can be used to enforce safe temporal behaviors on multi-core platforms? I will also illustrate possible approaches for time-aware fault tolerance to maximize CPS functional safety. Finally, I will review the challenges faced by the embedded industry when trying to adopt emerging computing platforms, and I will highlight some novel directions that can be followed to accomplish my research vision.

Speaker's bio:

Renato Mancuso is a doctoral candidate in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He is interested in high-performance cyber-physical systems, with a specific focus on techniques to enforce strong performance isolation and temporal predictability in multi-core systems. He has published around 20 papers in major conferences and journals. His papers were awarded a best student paper award and a best presentation award at the Real-Time and Embedded Technology and Applications Symposium (RTAS) in 2013 and 2016, respectively. He was the recipient of a Computer Science Excellence Fellowship, and a finalist for the Qualcomm Innovation Fellowship. Some of the design principles for real-time multi-core computing proposed in his research have been officially incorporated in recent certification guidelines for avionics systems. They have also been endorsed by government agencies, industries and research institutions worldwide. He received a B.S. in Computer Engineering with honors (2009) and a M.S. in Computer Engineering with honors (2012) from the University of Rome "Tor Vergata".



Computational fair division and mechanism design
Simina Branzei | Hebrew University of Jerusalem

2017-02-20, 10:00 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

The emergence of online platforms has brought about a fundamental shift in economic thinking: the design of economic systems is now a problem that computer science can tackle. For the first time we are able to move from the study of economic systems as natural systems to carefully designing and executing them on a computer. Prominent examples of digital market mechanisms include auctions for ads (run by companies such as Google) and electromagnetic spectrum (used by the US government). I will discuss several recent developments in fair division and mechanism design. I will start with a dictatorship theorem for fair division (cake cutting), showing that requiring truthfulness gives rise to a dictator. Afterwards, I will discuss the theme of simplicity and complexity in mechanism design, and more generally the interplay between economics and computation and learning.
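As a concrete, if toy, illustration of the cake-cutting setting, here is a sketch of the classic divide-and-choose protocol for two agents. The valuation densities, the bisection approach, and all names are invented for illustration; this is not the mechanism analyzed in the talk.

```python
# Toy divide-and-choose over the cake [0, 1] (illustrative sketch only).
# Each agent's valuation is a density function, integrated numerically.

def value(density, a, b, n=10_000):
    """Approximate the integral of `density` over [a, b] by midpoint sums."""
    if b <= a:
        return 0.0
    step = (b - a) / n
    return sum(density(a + (i + 0.5) * step) for i in range(n)) * step

def divide_and_choose(cutter, chooser):
    """Cutter bisects to a point splitting the cake into two pieces it
    values equally; chooser then takes its preferred piece."""
    lo, hi = 0.0, 1.0
    for _ in range(50):                      # bisection on the cut point
        mid = (lo + hi) / 2
        if value(cutter, 0.0, mid) < value(cutter, mid, 1.0):
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    # Chooser picks the piece it values more; cutter keeps the rest.
    if value(chooser, 0.0, cut) >= value(chooser, cut, 1.0):
        return ("left_to_chooser", cut)
    return ("right_to_chooser", cut)

# Agent 1 values the cake uniformly; agent 2 only values the right half.
uniform = lambda x: 1.0
right_half = lambda x: 2.0 if x >= 0.5 else 0.0

outcome, cut = divide_and_choose(uniform, right_half)
```

Both agents end up with a piece they value at no less than half the cake, which is the proportionality guarantee the dictatorship result in the talk should be contrasted against.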

Speaker's bio:

Simina Branzei is an I-CORE postdoctoral fellow at the Hebrew University of Jerusalem, specializing in the area of Economics and Computation. Her research has been published at top conferences in artificial intelligence such as AAAI and IJCAI, and she received multiple awards, such as the Simons-Berkeley fellowship, the IBM Ph.D. fellowship, and the Google Anita Borg Memorial scholarship. She completed a Ph.D. at Aarhus University, Denmark, M.Math from the University of Waterloo, Canada, and held visiting positions at Tsinghua University, China, and Carnegie Mellon University.



Adventures in Systems Reliability: Replication and Replay
Ali Mashtizadeh | Stanford University

2017-02-17, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

The past decade has seen a rapid acceleration in the development of new and transformative applications in many areas including transportation, medicine, finance, and communication. Most of these applications are made possible by the increasing diversity and scale of hardware and software systems.

While this brings unprecedented opportunity, it also increases the probability of failures and the difficulty of diagnosing them. Increased scale and transience have also made management increasingly challenging. Devices can come and go for a variety of reasons including mobility, failure and recovery, and scaling capacity to meet demand.

In this talk, I will be presenting several systems that I built to address the resulting challenges to reliability, management, and security.

Ori is a reliable distributed file system for devices at the network edge. Ori automates many of the tasks of storage reliability and recovery through replication, taking advantage of fast LANs and low cost local storage in edge networks.

Castor is a record/replay system for multi-core applications with predictable and consistently low overheads. This makes it practical to leave record/replay on in production systems, to reproduce difficult bugs when they occur, and to support recovering from hardware failures through fault tolerance.
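The core record/replay idea can be sketched in a few lines (this is a minimal illustration of the concept, not Castor's actual multi-core mechanism): every nondeterministic value a program consumes is logged during recording, and replay feeds the log back instead.

```python
import random

# Minimal record/replay sketch: in record mode, nondeterministic values
# are appended to a log; in replay mode the log is consumed instead, so
# the execution is reproduced deterministically.

class Recorder:
    def __init__(self, log=None):
        self.replaying = log is not None
        self.log = log if log is not None else []
        self.pos = 0

    def nondet(self, source):
        if self.replaying:                  # replay: read from the log
            v = self.log[self.pos]
            self.pos += 1
            return v
        v = source()                        # record: capture the value
        self.log.append(v)
        return v

def program(rec):
    """A program whose result depends on two nondeterministic draws."""
    a = rec.nondet(lambda: random.randint(0, 100))
    b = rec.nondet(lambda: random.randint(0, 100))
    return a * 1000 + b

recording = Recorder()
first = program(recording)                   # record an execution
replayed = program(Recorder(recording.log))  # replay it exactly
```

The hard part in a real system, which this sketch sidesteps, is capturing nondeterminism from thread interleavings efficiently.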

Cryptographic CFI (CCFI) is a dynamic approach to control flow integrity. Unlike previous CFI systems that rely purely on static analysis, CCFI can classify pointers based on dynamic and runtime characteristics. This limits the attacks to only actively used code paths, resulting in a substantially smaller attack surface.
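The idea behind CCFI can be illustrated with a toy sketch (not the actual CCFI implementation, which works on machine pointers with hardware-accelerated MACs): each stored function pointer carries a MAC over its value and storage location, so a forged pointer fails verification at call time.

```python
import hmac, hashlib

# Illustrative sketch of MAC-protected code pointers. The key, table,
# and pointer values are all hypothetical.

KEY = b"per-process secret key"

def mac(ptr, location):
    msg = f"{ptr}:{location}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def store(table, location, ptr):
    table[location] = (ptr, mac(ptr, location))

def load_and_check(table, location):
    ptr, tag = table[location]
    if not hmac.compare_digest(tag, mac(ptr, location)):
        raise RuntimeError("control-flow integrity violation")
    return ptr

table = {}
store(table, "vtable_slot_0", 0x400123)
ok = load_and_check(table, "vtable_slot_0")        # legitimate use

# An attacker overwrites the pointer but cannot forge the MAC.
table["vtable_slot_0"] = (0xDEADBEEF, table["vtable_slot_0"][1])
try:
    load_and_check(table, "vtable_slot_0")
    attack_detected = False
except RuntimeError:
    attack_detected = True
```

Binding the MAC to the storage location is what lets the classification depend on runtime characteristics, as the abstract describes.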

Speaker's bio:

Ali is currently completing his PhD at Stanford University where he is advised by Prof. David Mazières. His work focuses on improving reliability, ease of management and security in operating systems and distributed systems. Previously, he was a Staff Engineer at VMware, Inc. working as the technical lead for the live migration products. Ali received an M.Eng. in electrical engineering and computer science and a B.S. in electrical engineering from the Massachusetts Institute of Technology.



Type-Driven Program Synthesis
Nadia Polikarpova | MIT CSAIL, Cambridge USA

2017-02-15, 10:00 - 10:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Modern programming languages safeguard developers from many typical errors, yet more subtle errors—such as violations of security policies—still plague software. Program synthesis has the potential to eliminate such errors, by generating executable code from concise and intuitive high-level specifications. Traditionally, program synthesis failed to scale to specifications that encode complex behavioral properties of software: these properties are notoriously hard to check even for a given program, and so it’s not surprising that finding the right program within a large space of candidates has been considered very challenging. My work tackles this challenge through the design of synthesis-friendly program verification mechanisms, which are able to check a large set of candidate programs against a complex specification at once, thereby efficiently pruning the search space.

Based on this principle, I developed Synquid, a program synthesizer that accepts specifications in the form of expressive types and uses a specialized type checker as its underlying verification mechanism. Synquid is the first synthesizer powerful enough to automatically discover provably correct implementations of complex data structure manipulations, such as insertion into Red-Black Trees and AVL Trees, and normal-form transformations on propositional formulas. Each of these programs is synthesized in under a minute. Going beyond textbook algorithms, I created a language called Lifty, which uses type-driven synthesis to automatically rewrite programs that violate information flow policies. In our case study, Lifty was able to enforce all required policies in a prototype conference management system.
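To make the search-and-prune structure of synthesis concrete, here is a toy enumerative synthesizer. It is emphatically not Synquid: candidates are pruned against input/output examples, whereas Synquid prunes whole candidate sets at once via refinement-type checking. All names and the grammar are invented.

```python
import itertools

# Toy enumerative synthesis: enumerate expressions over x by depth and
# keep the first one consistent with a specification given as examples.

def candidates(depth):
    """Enumerate expression strings over x, constants 1 and 2, +, *."""
    if depth == 0:
        yield from ["x", "1", "2"]
        return
    yield from candidates(depth - 1)
    for a, b in itertools.product(list(candidates(depth - 1)), repeat=2):
        yield f"({a} + {b})"
        yield f"({a} * {b})"

def synthesize(spec, max_depth=2):
    """Return the first enumerated expression satisfying all examples."""
    for depth in range(max_depth + 1):
        for expr in candidates(depth):
            if all(eval(expr, {"x": i}) == o for i, o in spec):
                return expr
    return None

# Specification for f(x) = 2x + 1, given as input/output examples.
spec = [(0, 1), (1, 3), (5, 11)]
found = synthesize(spec)
```

The combinatorial blow-up of this naive search is exactly why checking many candidates against the specification at once, as in the talk, matters.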

Speaker's bio:

Nadia Polikarpova is a postdoctoral researcher at the MIT Computer Science and Artificial Intelligence Lab, interested in helping programmers build secure and reliable software. She completed her PhD at ETH Zurich. For her dissertation she developed tools and techniques for automated formal verification of object-oriented libraries, and created the first fully verified general-purpose container library, receiving the Best Paper Award at the International Symposium on Formal Methods. During her doctoral studies, Nadia was an intern at MSR Redmond, where she worked on verifying real-world implementations of security protocols. At MIT, Nadia has been applying formal verification to automate various critical and error-prone programming tasks.



On the Security and Scalability of Proof of Work Blockchains
Arthur Gervais | ETH Zurich

2017-02-08, 10:00 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

The security properties of blockchain technology allow for the shifting of trust assumptions, e.g., to remove trusted third parties; they, however, create new challenges for security and scalability, which have not yet been fully understood and that we investigate in this talk. The blockchain’s security, for example, affects the ability of participants to exchange monetary value or to participate in the network communication and the consensus process. Our first contribution provides a quantitative framework to objectively compare the security and performance characteristics of Proof of Work-based blockchains under adversaries with optimal strategies. Our work allows us to increase Bitcoin’s transaction throughput by a factor of ten, given only one parameter change and without deteriorating the security of the underlying blockchain. In our second contribution, we highlight previously unconsidered impacts of the PoW blockchain’s scalability on its security and propose design modifications that are now implemented in the primary Bitcoin client. Because blockchain technology is still in its infancy, we conclude the talk with an outline of future work towards an open, scalable, privacy-preserving and decentralized blockchain.
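As a hedged illustration of what a quantitative PoW security analysis looks like (this is the classic double-spend calculation from the Bitcoin whitepaper, not the framework presented in the talk), one can compute the probability that an attacker with a given fraction of the hash power rewrites a confirmed transaction:

```python
import math

# Nakamoto's catch-up analysis: probability that an attacker with
# fraction q of the hash power ever overtakes a transaction buried
# under z confirmations.

def attacker_success(q, z):
    p = 1.0 - q
    if q >= p:
        return 1.0                      # a majority attacker always wins
    lam = z * q / p                     # expected attacker progress
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# More confirmations drive the attacker's success probability down.
p1 = attacker_success(0.1, 1)
p6 = attacker_success(0.1, 6)
```

Frameworks like the one in the talk generalize this style of calculation to optimal adversarial strategies and realistic network parameters.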

Speaker's bio:

Arthur Gervais's research interests revolve around the security and privacy of blockchain technology, and he has also worked on web privacy. He defended his Ph.D. in December 2016 at the Institute of Information Security at ETH Zürich. During his Ph.D., he completed a three-month internship at Intel Labs, Oregon, working on blockchain technology. He obtained his Master's degrees from KTH Stockholm (Sweden) and Aalto University (Finland) in 2012. Furthermore, he holds a diplôme d'ingénieur from INSA de Lyon (France) from 2012. His Master's thesis was on the security of industrial control systems (SCADA).



Guiding program analyzers toward unsafe executions
Dr. Maria Christakis | University of Kent

2017-02-06, 10:00 - 10:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Most static program analysis techniques do not fully verify all possible executions of a program. They leave executions unverified when they do not check certain properties, fail to verify properties, or check properties under certain unsound assumptions, such as the absence of arithmetic overflow. In the first part of the talk, I will present a technique to complement partial verification results by automatic test case generation. In contrast to existing work, our technique supports the common case that the verification results are based on unsound assumptions. We annotate programs to reflect which executions have been verified, and under which assumptions. These annotations are then used to guide dynamic symbolic execution toward unverified program executions, leading to smaller and more effective test suites.
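A drastically simplified rendering of the idea (hypothetical encoding, invented branch names; the real technique annotates programs and drives dynamic symbolic execution): each branch is marked as fully verified or not, and test generation keeps only inputs reaching code the verifier did not fully cover.

```python
# Toy sketch: verification annotations filter which executions still
# need testing. Branch names and the candidate scan are illustrative.

def branch_of(x):
    """Which branch of the program under test does input x take?"""
    if x > 1000:
        return "overflow_path"       # verified only under a no-overflow assumption
    if x >= 0:
        return "main_path"           # fully verified
    return "negative_path"          # not verified at all

verified_fully = {"main_path"}

def needs_testing(x):
    return branch_of(x) not in verified_fully

# Dynamic symbolic execution is approximated here by scanning candidate
# inputs; only executions the verifier did not fully cover enter the suite.
suite = [x for x in range(-5, 2000, 500) if needs_testing(x)]
```

The payoff described in the abstract is exactly this filtering effect: smaller test suites focused on the genuinely unverified executions.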

In the second part of the talk, I will describe a new program simplification technique, called program trimming. Program trimming is a program pre-processing step to remove execution paths while retaining equi-safety (that is, the generated program has a bug if and only if the original program has a bug). Since many program analyzers are sensitive to the number of execution paths, program trimming has the potential to improve their effectiveness. I will show that program trimming has a considerable positive impact on the effectiveness of two program analysis techniques, abstract interpretation and dynamic symbolic execution.
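A toy model makes the equi-safety property of trimming tangible (this is far simpler than the actual pre-processing step; programs here are just sets of guarded assertions, and the "pre-analysis" is stipulated):

```python
# Toy program trimming: a "program" is a set of execution paths, each a
# (guard, assertion) pair over the input. Trimming removes paths a cheap
# pre-analysis proved safe, preserving equi-safety: the trimmed program
# has a failing input iff the original does.

def has_bug(paths, inputs):
    """Does any input drive some path to a failing assertion?"""
    return any(guard(i) and not assertion(i)
               for guard, assertion in paths for i in inputs)

original = [
    (lambda x: x >= 0, lambda x: x + 1 > x),     # provably safe path
    (lambda x: x < 0,  lambda x: x * x > 100),   # possibly failing path
]

# The "pre-analysis": here we simply stipulate that path 0 was proven safe.
proven_safe = {0}
trimmed = [p for i, p in enumerate(original) if i not in proven_safe]

inputs = range(-20, 21)
equi_safe = has_bug(original, inputs) == has_bug(trimmed, inputs)
```

Since downstream analyzers pay per path, shrinking the path set without changing the bug verdict is pure profit, which is the claim the abstract quantifies.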

Speaker's bio:

Maria Christakis is currently a lecturer (assistant professor) in the School of Computing at the University of Kent, England. She was previously a post-doctoral researcher at Microsoft Research Redmond, USA. She received her Ph.D. from the Department of Computer Science of ETH Zurich, Switzerland in the summer of 2015. Maria was awarded the ETH medal for an outstanding doctoral thesis. She completed her Bachelor's and Master's degrees at the Department of Electrical and Computer Engineering of the National Technical University of Athens, Greece.




Proving Performance Properties of Higher-order Functions with Memoization
Ravi Madhavan | EPFL

2016-12-20, 11:30 - 12:30
Kaiserslautern building G26, room 112 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Static verification of performance properties of programs is an important problem that has attracted a great deal of research. However, most existing tools infer best-effort upper bounds and hope that they match users' expectations. In this talk, I will present a system for specifying and verifying bounds on resources, such as the number of evaluation steps and heap-allocated objects, for functional Scala programs that may rely on lazy evaluation and memoization. In our system, users can specify the desired resource bound as a template with numerical holes in the contracts of functions, e.g. as "steps <= ? * size(list) + ?", along with other functional properties necessary for establishing the bounds. The system automatically infers values for the holes that will make the templates hold for all executions of the functions. For example, the property that a function converting a propositional formula f into negation-normal form (NNF) takes time linear in the size of f can be expressed in the post-condition of the function using the predicate "steps <= ? * size(f) + ?", where size is a user-defined function counting the number of nodes in the syntax tree of the formula. Using our tool, we have verified asymptotically precise bounds of several algorithms and data structures that rely on complex sharing of higher-order functions and memoization. Our benchmarks include balanced search trees like red-black trees and AVL trees, Okasaki's constant-time queues, deques, lazy data structures based on numerical representations such as lazy binomial heaps, cyclic streams, and dynamic programming algorithms. Some of the benchmarks have posed serious challenges to automatic as well as manual reasoning. The system is part of the Leon verifier and can be tried online at "http://leondev.epfl.ch" ("Resource bounds" section).
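The template idea can be illustrated with a small sketch. Note the hedge: the actual system infers the holes statically from verification conditions, whereas this toy merely counts evaluation steps dynamically and fits the two holes of "steps <= ? * size(list) + ?" from sample runs.

```python
# Toy illustration of resource-bound templates: count the steps of a
# list-length function and fit holes a, b in "steps <= a * size + b".

def length(lst, counter):
    counter[0] += 1                      # charge one step per call
    return 0 if not lst else 1 + length(lst[1:], counter)

def steps_for(n):
    counter = [0]
    length(list(range(n)), counter)
    return counter[0]

# Fit the holes from two sample sizes.
s1, s2 = steps_for(1), steps_for(2)
a = s2 - s1
b = s1 - a * 1

# Check the fitted template on a range of inputs.
template_holds = all(steps_for(n) <= a * n + b for n in range(20))
```

The verified system proves such a bound for all inputs, not just the sampled ones, which is what separates verification from the curve-fitting shown here.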

References:
(a) Symbolic Resource Bound Inference for Functional Programs. Ravichandhran Madhavan and Viktor Kuncak. Computer Aided Verification (CAV), 2014.
(b) Contract-based Resource Verification for Higher-order Functions with Memoization. Ravichandhran Madhavan, Sumith Kulal and Viktor Kuncak. To appear in POPL 2017.

Speaker's bio:

Ravichandhran Madhavan is a fifth-year Ph.D. student at EPFL, Switzerland, where he is advised by Prof. Viktor Kuncak. His research interests lie in the areas of programming languages, static program analysis and software verification. Before joining EPFL, he spent a couple of years as a research assistant in the Programming Languages and Tools group of Microsoft Research India, where he developed a static side-effects analysis for C# programs (seal.codeplex.com). His Ph.D. thesis is focused on resource verification of higher-order programs. Website: lara.epfl.ch/~kandhada



Sustaining the Energy Transition: A Role for Computer Science and Complex Networks
Marco Aiello | Rijksuniversiteit Groningen

2016-11-03, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The energy sector is in the midst of an exciting transition. Driven by new generation technologies and by infrastructure digitalization, the traditional way of transmitting, distributing and using energy is transforming from a centralized, hierarchical system into a multi-directional open infrastructure. While the vision of Intelligent Energy Networks is appealing and desirable---especially from a sustainability perspective---a number of major challenges remain to be tackled. The loss of centralized control, the intermittent nature of renewable energy sources and the scale of future digital energy systems are novel situations for power systems infrastructures and consumers that pose reliability and availability threats.

In this talk, I show examples of how Computer Science techniques have, and will continue to have, an important role in future energy systems. I focus on electricity as the energy vector, and on techniques from Service-Oriented Computing and AI Planning. I also present Complex Network theory as a design tool for energy distribution systems. To make things concrete, I will review almost ten years of personal research that includes making office buildings energy efficient, making homes smarter, and futuristic models for the evolution of power distribution grids to accommodate multi-directional energy flows with distributed generation and local control.

Speaker's bio:

Marco Aiello is full professor of Distributed Systems at the University of Groningen (RUG), head of the Distributed Systems unit, Founder of the startup SustainableBuildings, and member of the Board of the startup Nerdalize BV. Before joining the RUG he was a Lise Meitner fellow at the Technical University of Vienna (from which he obtained the Habilitation), and assistant professor at the University of Trento. He holds a PhD in Logic from the University of Amsterdam and a MSc in Engineering from the University of Rome La Sapienza, cum Laude.



Multi-Authority ABE: Constructions and Applications
Beverly Li | Hunan University

2016-09-23, 11:00 - 12:00
Saarbrücken building E1 5, room 029

Abstract:

Attribute-Based Encryption (ABE) is a form of asymmetric cryptography that allows encryption over labels named "attributes". In an ABE scheme, an "authority" generates public parameters and secrets and assigns attributes (and associated secrets) to users. Data can be encrypted using formulas over attributes; users can decrypt if they have attribute secrets that satisfy the encryption formula.
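The access-policy logic, stripped of all cryptography, looks like this (a toy sketch; the policy and attribute names are invented, and a real ABE scheme enforces this check cryptographically rather than by evaluation):

```python
# Toy evaluation of an ABE-style "formula over attributes": a user can
# decrypt iff their attribute set satisfies the encryption formula.

def satisfies(formula, attrs):
    """formula is nested tuples: ('attr', name) | ('and'|'or', f, g)."""
    op = formula[0]
    if op == "attr":
        return formula[1] in attrs
    if op == "and":
        return satisfies(formula[1], attrs) and satisfies(formula[2], attrs)
    return satisfies(formula[1], attrs) or satisfies(formula[2], attrs)

# Policy: (doctor AND cardiology) OR auditor
policy = ("or",
          ("and", ("attr", "doctor"), ("attr", "cardiology")),
          ("attr", "auditor"))

can_decrypt = satisfies(policy, {"doctor", "cardiology"})
cannot = satisfies(policy, {"doctor"})
```

In the multi-authority setting discussed in the talk, the attributes in such a formula may be issued by different, mutually distrusting authorities.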

In this talk, I will discuss an extension to ABE that allows encryption over attributes provided by multiple authorities. Such a scheme enables secure data sharing between otherwise distrusting organizations. I will discuss example scenarios where multi-authority ABE is useful, and describe one new construction of a multi-authority ABE scheme named DMA.

In DMA, a data owner is a first class principal: users in the system get attributes in cooperation with the data owner and various authorities. Compared to previous work, DMA does not require a global identity for users, or require the multiple authorities to trust a single central authority. DMA is also immune to collusion attacks mounted by users and authorities.

Speaker's bio:

Beverly Li received her Ph.D. from Shanghai Jiaotong University in 2007, and is an assistant professor at Hunan University. Her research interests are in security, networking, and computer systems.



Useful but ugly games
Ruediger Ehlers | Univ. of Bremen

2016-09-13, 10:30 - 11:30
Kaiserslautern building G26, room 112 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The controller synthesis problem for CPS is often reduced to solving omega-regular games with an optional optimization criterion. The criteria commonly used in the literature on omega-regular games are, however, frequently unsuitable for obtaining high-quality CPS controllers, as they are unable to capture many, if not most, real-world optimization objectives. We survey a few such cases and show that these problems can be overcome with more sophisticated optimization criteria. The synthesis problem for them gives rise to ugly games, i.e., games that have complicated definitions but relatively simple solutions.
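For readers new to games on graphs, the simplest omega-regular objective, reachability, is solved by the standard attractor fixpoint sketched below (the example graph is invented; the "ugly games" of the talk layer richer objectives and optimization criteria on top of this machinery):

```python
# Attractor computation for a two-player reachability game: find all
# nodes from which player 0 can force a visit to the target set.

def attractor(nodes, edges, owner, target):
    """owner[v] in {0, 1}; player 0 wins by reaching `target`."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in win:
                continue
            succs = edges[v]
            if owner[v] == 0 and any(s in win for s in succs):
                win.add(v); changed = True   # player 0 can pick a good edge
            elif owner[v] == 1 and succs and all(s in win for s in succs):
                win.add(v); changed = True   # player 1 cannot avoid winning region
    return win

nodes = ["a", "b", "c", "d"]
edges = {"a": ["b", "c"], "b": ["d"], "c": ["c"], "d": ["d"]}
owner = {"a": 0, "b": 1, "c": 0, "d": 1}
winning = attractor(nodes, edges, owner, target={"d"})
```

From "a", player 0 moves to "b", from which player 1 is forced into "d"; the self-loop at "c" is a trap player 0 simply avoids.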

Speaker's bio:

-



Domain Specific Languages for Verified Software
Damien Zufferey | MIT

2016-08-15, 10:30 - 12:00
Kaiserslautern building G26, room 111

Abstract:

In this talk, I will show how we can harness the synergy between programming languages and verification methods to help programmers build reliable software, prove complex properties about it, and scale verification to industrial projects. First, I will describe P, a domain-specific language to write asynchronous event-driven code. P isolates the control structure, or protocol, from the data processing. This allows us not only to generate efficient code, but also to test it using model checking techniques. P was used to implement and verify the core of the USB device driver stack that ships with Microsoft Windows 8 and later versions. The language abstractions and verification helped build a driver which is both reliable and fast. Then, I will introduce PSync, a domain-specific language for fault-tolerant distributed algorithms that simplifies the implementation of these algorithms, enables automated formal verification, and can be executed efficiently. Fault-tolerant algorithms are notoriously difficult to implement correctly, due to asynchronous communication and faults. PSync provides a high-level abstraction by viewing an asynchronous faulty system as a synchronous one with an adversarial environment that simulates faults. We have implemented several important fault-tolerant distributed algorithms in PSync, and we compare the implementation of consensus algorithms in PSync against implementations in other languages.
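The separation P enforces between control structure and data processing can be sketched as follows (a hypothetical Python rendering, not P syntax; the driver-like protocol and all names are invented):

```python
# Sketch of an event-driven state machine in the style P encourages:
# the transition table is the protocol, the lambdas are the data
# processing, and the two are kept apart.

class Machine:
    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions   # (state, event) -> (next_state, action)
        self.log = []

    def send(self, event, payload=None):
        key = (self.state, event)
        if key not in self.transitions:
            raise RuntimeError(f"unhandled event {event} in state {self.state}")
        nxt, action = self.transitions[key]
        if action:
            self.log.append(action(payload))
        self.state = nxt

# A toy device-driver-like protocol: idle -> busy -> idle.
driver = Machine("idle", {
    ("idle", "request"): ("busy", lambda p: f"start {p}"),
    ("busy", "done"):    ("idle", lambda p: "complete"),
})

driver.send("request", "read-block-7")
driver.send("done")
```

Because the protocol is an explicit finite table, a model checker can explore it exhaustively, which is what makes the P approach testable in the way the abstract describes.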

Speaker's bio:

Damien Zufferey has been a postdoctoral researcher in Martin Rinard's group at MIT CSAIL since October 2013. Before moving to MIT, he obtained a PhD at the Institute of Science and Technology Austria (IST Austria) under the supervision of Thomas A. Henzinger in September 2013, and a Master's in computer science from EPFL in 2009. He is interested in improving software reliability by developing theoretical models, building analysis tools, and giving the programmer the appropriate language constructs. He is particularly interested in the study of complex concurrent systems. His research lies at the intersection of formal methods and programming languages.



Cache-Persistence-Aware Response-Time Analysis for Fixed-Priority Preemptive Systems
Geoffrey Nelissen | CISTER, Porto

2016-08-10, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The existing gap between the processor and main memory operating speeds motivated the introduction of intermediate cache memories to accelerate the average access time to instructions and data accessed by programs running on the processor. The introduction of cache memories in modern computing platforms is the cause of important variations in the execution time of each task, depending on whether the instructions and data it requires are already loaded in the cache or not. Many works have focused on analyzing the impact of preemptions on the worst-case execution time (WCET) and worst-case response time (WCRT) of tasks in preemptive systems. Indeed, a preempted task may suffer additional cache misses if its memory blocks are evicted from the cache during the execution of preempting tasks. These evictions cause extra accesses to the main memory, which result in additional delays in the task execution. This extra cost is usually referred to as cache-related preemption delay (CRPD).

Several approaches use information about the tasks' memory access patterns to bound and incorporate preemption costs into the WCRT analysis. These approaches all result in pessimistic WCRT bounds due to the fact that they do not consider the variation in memory demand across successive instances of the same task. They usually assume that the useful cache content for the task is completely erased between two of its executions. However, in actual systems, successive instances of a task may re-use most of the data and instructions that were already loaded in the cache during previous executions. During this talk, we will discuss the concept of persistent cache blocks from a task WCRT perspective, and will present how it can be used to reduce the pessimism of the WCRT analysis for fixed-priority preemptive systems. Then, we will introduce techniques exploiting this notion of cache persistence to pre-configure systems so as to improve their runtime behavior.
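The baseline analysis being refined can be sketched numerically (a classic fixed-priority response-time iteration with a crude per-preemption CRPD charge; the task parameters are invented, and the talk's contribution is precisely to tighten the reload term using persistent cache blocks):

```python
import math

# Classic WCRT fixed-point iteration with a naive CRPD term: every
# preemption by a higher-priority task j charges its execution time
# C[j] plus a cache-reload penalty crpd[j].

def response_time(i, C, T, crpd):
    """WCRT of task i; tasks 0..i-1 have higher priority."""
    R = C[i]
    while True:
        interference = sum(
            math.ceil(R / T[j]) * (C[j] + crpd[j])
            for j in range(i))
        R_new = C[i] + interference
        if R_new == R:
            return R                     # fixed point reached
        if R_new > T[i]:
            return None                  # deadline (= period) missed
        R = R_new

C = [1, 2, 4]           # worst-case execution times
T = [5, 10, 20]         # periods (deadlines equal to periods)
crpd = [0.5, 0.5, 0.0]  # reload penalty charged per preemption

R2 = response_time(2, C, T, crpd)
```

Assuming the full penalty on every preemption is exactly the pessimism the abstract criticizes: if most evicted blocks are persistent across instances, the true reload cost is far lower.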

Speaker's bio:

Geoffrey Nelissen was born in Brussels, Belgium in 1985. He earned his M.Sc. degree in Electrical Engineering at Université Libre de Bruxelles (ULB), Belgium in 2008. He then worked during four years as a Ph.D. student in the PARTS research unit of ULB. In 2012, he received his Ph.D. degree under the supervision of Professors Joël Goossens and Dragomir Milojevic, on the topic "Efficient Optimal Multiprocessor Scheduling Algorithms for Real-Time Systems". He is currently working at CISTER, Porto, Portugal, as an associate researcher in the area of real-time scheduling, embedded, distributed and safety critical system design and analysis.



The Graph Isomorphism Problem, CANCELLED
Laci Babai | University of Chicago

2016-08-05, 11:00 - 11:45
Saarbrücken building E1 4, room 024

Abstract:

Laci will report on this breakthrough result on graph isomorphism.

The presentation will be in two parts of 45 minutes each.

The first part will be for a general audience.
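For context on the problem itself, a brute-force isomorphism check is easy to write but factorial-time (the graphs below are invented examples; Babai's breakthrough is a quasipolynomial-time algorithm, an entirely different approach from this naive search):

```python
from itertools import permutations

# Naive graph isomorphism: try every vertex relabeling and check that
# it maps edges to edges and non-edges to non-edges.

def isomorphic(g, h):
    """g, h: adjacency sets over vertices 0..n-1."""
    n = len(g)
    if n != len(h):
        return False
    for perm in permutations(range(n)):
        if all((perm[u] in h[perm[v]]) == (u in g[v])
               for u in range(n) for v in range(n)):
            return True
    return False

# A 4-cycle and a relabeled 4-cycle are isomorphic; a 4-path is not.
cycle    = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
shuffled = {0: {2, 3}, 1: {2, 3}, 2: {0, 1}, 3: {0, 1}}
path     = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}

same = isomorphic(cycle, shuffled)
diff = isomorphic(cycle, path)
```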

Speaker's bio:

-



An overview of MSR-I
Chandu Thekkath | Microsoft Research India

2016-08-05, 10:30 - 12:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

This talk will briefly cover the overall research agenda of the MSR Lab in Bangalore. We work in many broad areas of CS, including Algorithms, Crypto, Systems, ML, and ICT4D, among others. The talk will cover three ongoing projects to give you a sense of the breadth of our work: The Trusted Cloud, Green Spaces, and 99DOTS. The goal of the Trusted Cloud project is to explore the challenges of keeping client data stored in the Cloud secure without trusting the Cloud operator, and involves research in the disciplines of computer security, programming languages and verification, and hardware.

The Green Spaces project attempts to understand the implications of using TV spectrum to provide ubiquitous internet access in countries like India or Brazil where, unlike the US, there is plenty of unused spectrum that can be tapped. This project involves both questions in CS research as well as policy issues at the national level on spectrum allocation.

The 99DOTS project addresses the problem that arises when patients do not adhere to medications as prescribed by their doctors. Such non-adherence has severe health consequences for large populations of patients in all parts of the world. 99DOTS proposes a novel solution to ensure medication adherence in a very cost-effective way, and is used by the Indian Government for tuberculosis treatment in all its treatment centers in the country.

Speaker's bio:

Chandu Thekkath, currently head of Microsoft Research India, joined Microsoft in 2001. Microsoft Research India, which began operating in January 2005, conducts basic research in computing and engineering sciences relevant to Microsoft’s business and the global IT community, with a special focus on algorithms, cryptography, security, mobility, networks and systems, multilingual systems, software engineering, machine learning, and the role of technology in socioeconomic development. Thekkath began his career at Microsoft as a Senior Researcher at Microsoft Research Silicon Valley, where he did research in multiple areas: mobile devices, distributed data intensive computing, and large-scale storage systems. He also worked with the Hotmail team as chief architect for the Blue project. Blue went into production use within MSN in mid-2006 and was an early example within Microsoft of a large scale distributed storage system that provided strict read/write guarantees in the presence of disk, machine, and network failures.



Learning-Based Synthesis
Daniel Neider | UCLA

2016-07-25, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Synthesis, the automatic construction of objects related to hardware and software, is one of the great challenges of computer science. Although synthesis problems are impossible to solve in general, learning-based approaches, in which the synthesis of an object is based on learning from examples, have recently been used to build elegant and extremely effective solutions for a large number of difficult problems. Examples include automatically fixing bugs, translating programs from one language into another, verifying programs, and generating high-level code from given specifications.

This talk gives an introduction to learning-based synthesis. First, we develop a generic view on learning-based synthesis, called abstract learning frameworks for synthesis, which introduces a common terminology to compare and contrast learning-based synthesis techniques found in the literature. Then, we present a learning-based program verifier, which can prove the correctness of numeric programs (nearly) automatically, and show how this technique can be modeled as an abstract learning framework for synthesis. During the talk, we present various examples that highlight the power of the learning-based approach to synthesis.

Speaker's bio:

I work as postdoctoral researcher in the ExCAPE project at University of Illinois at Urbana-Champaign and University of California, Los Angeles. I joined ExCAPE in August 2014. I received my Ph.D. from RWTH Aachen University in April 2014, where I worked with Christof Löding and Wolfgang Thomas. My thesis is on Applications of Automata Learning in Verification and Synthesis. During this time, I visited Prof. Madhusudan at University of Illinois at Urbana-Champaign. I graduated from RWTH Aachen University with a Master of Science in Computer Science in November 2007.



Algorithmic fairness: a mathematical perspective
Suresh Venkatasubramanian | University of Utah

2016-07-22, 14:00 - 15:30
Saarbrücken building E1 5, room 029

Abstract:

Machine learning has taken over our world, in more ways than we realize. You might get book recommendations, or an efficient route to your destination, or even a winning strategy for a game of Go. But you might also be admitted to college, granted a loan, or hired for a job based on algorithmically enhanced decision-making. We believe machines are neutral arbiters: cold, calculating entities that always make the right decision, that can see patterns that our human minds can't or won't. But are they? Or is decision-making-by-algorithm a way to amplify, extend and make inscrutable the biases and discrimination that are prevalent in society? To answer these questions, we need to go back — all the way to the original ideas of justice and fairness in society. We also need to go forward — towards a mathematical framework for talking about justice and fairness in machine learning.

Speaker's bio:

Suresh Venkatasubramanian is an associate professor in the School of Computing at the University of Utah. He did his Ph.D at Stanford University, and did a stint at AT&T Research before joining the U. His research interests include computational geometry, data mining and machine learning, with special interests in high dimensional geometry, large data algorithms, clustering and kernel methods. He received an NSF CAREER award in 2010. He spends much of his time now thinking about the problem of "algorithmic fairness": how we can ensure that algorithmic decision-making is fair, accountable and transparent. His work has been covered on Science Friday, NBC News, and Gizmodo, as well as in various print outlets.



Framing Dependencies Introduced by Underground Commoditization
Damon McCoy | NYU

2016-07-21, 13:30 - 15:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Internet crime has become increasingly dependent on the underground economy: a loose federation of specialists selling capabilities, services, and resources explicitly tailored to the abuse ecosystem. Through these emerging markets, modern criminal entrepreneurs piece together dozens of à la carte components into entirely new criminal endeavors. In this talk, I'll discuss parts of this ecosystem and show that criminal reliance on this black market introduces fragile dependencies that, if disrupted, undermine entire operations that as a composite appear intractable to protect against.

Speaker's bio:

Prof. McCoy is an Assistant Professor at New York University in the Computer Science and Engineering department. His focus is on empirical measurements of the socio-economics of cyber attackers and security of cyber-physical systems.



Algorithmic Methods in Combinatorial Discrepancy
Nikhil Bansal | Eindhoven University of Technology

2016-07-14, 10:00 - 11:00
Saarbrücken building E1 4, room 024

Abstract:

Discrepancy theory is a widely studied area of combinatorics, and has various applications in computer science to areas such as approximation algorithms, space-time lower bounds, geometric algorithms, numerical integration and so on. In the last few years there has been remarkable progress in our understanding of algorithmic aspects of discrepancy. In addition to leading to efficient algorithms for various problems where only non-constructive proofs were known before, these methods have also led to improved results and various new connections between discrepancy and convex geometry, optimization and probability. In this talk, we will give an overview of some of these developments.
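
As a toy illustration of the central object: the discrepancy of a set system is the smallest achievable imbalance over all ±1 colorings of the ground set. A minimal brute-force sketch (the set system below is hypothetical, and real discrepancy algorithms are far more sophisticated than enumeration):

```python
from itertools import product

def discrepancy(sets, coloring):
    """Discrepancy of a +/-1 coloring: the largest imbalance over any set."""
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

def min_discrepancy(n, sets):
    """Brute force over all 2^n colorings (fine only for tiny examples)."""
    return min(discrepancy(sets, c) for c in product([-1, 1], repeat=n))

# Hypothetical set system on ground set {0, 1, 2, 3}: the edges of a 4-cycle.
sets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
print(min_discrepancy(4, sets))  # alternating colors balance every edge
```

Here the alternating coloring (+1, -1, +1, -1) balances every set exactly, so the minimum discrepancy is 0.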

Speaker's bio:

Nikhil Bansal is a Professor in the Department of Mathematics and Computer Science at Eindhoven University of Technology. He received his Bachelor's degree in Computer Science from IIT Mumbai (1999) and obtained his PhD from Carnegie Mellon University in 2003. He worked at the IBM T.J. Watson Research Center until 2011, where he also managed the Algorithms group. He is broadly interested in theoretical computer science, with a focus on the design and analysis of algorithms. He has received several best paper awards for his work and is on the editorial boards of several journals. He is also the recipient of an NWO Vidi grant, an ERC consolidator grant and an NWO TOP grant.



Algorithms for the Quantitative Analysis of Infinite-State Systems
Christoph Haase | Laboratoire Spécification et Vérification, Cachan, France

2016-06-20, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Counter automata are an appealing formal model of systems and programs with an unbounded number of states that find a plethora of applications, for instance in the verification of concurrent shared-memory programs. A counter automaton comprises a finite-state controller with a finite number of counters ranging over the natural numbers that can be incremented, decremented and tested for zero when a transition is taken. Despite having been studied since the early days of computer science, many important problems about decidable subclasses of counter automata have remained unsolved. This talk will give an overview of the history and some of the progress on both theoretical and practical aspects of counter automata that has been made over the last two years, focusing on new results concerning reachability, stochastic extensions, practical decision procedures, and the challenges that lie ahead.
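
The model can be made concrete with a tiny simulator. The sketch below is illustrative only (the encoding and API are my own, not from the talk): a finite-state controller whose guarded transitions zero-test, increment, and decrement counters over the naturals.

```python
def run(program, state, counters, fuel=10_000):
    """Simulate a counter automaton.
    program maps a state to guarded transitions (guard, counter, op, next):
    guard in {'zero', 'nonzero', 'any'} tests the named counter,
    op in {'inc', 'dec', 'nop'} updates it. fuel bounds the simulation."""
    while state in program and fuel:
        fuel -= 1
        for guard, c, op, nxt in program[state]:
            v = counters[c]
            if guard == 'any' or (guard == 'zero') == (v == 0):
                counters[c] = v + (op == 'inc') - (op == 'dec')
                state = nxt
                break
        else:
            break  # no enabled transition: halt
    return state, counters

# Example: drain counter c1 into c0, i.e. compute c0 := c0 + c1.
add = {
    'take': [('nonzero', 'c1', 'dec', 'put'),    # take one unit out of c1...
             ('zero',    'c1', 'nop', 'halt')],  # ...until c1 hits zero
    'put':  [('any',     'c0', 'inc', 'take')],  # ...and put it into c0
}
state, cs = run(add, 'take', {'c0': 3, 'c1': 4})
print(state, cs)
```

Even this two-counter flavor of the model is enough to encode surprisingly hard questions, which is why reachability for subclasses is such a delicate topic.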

This talk is based on joint work with M. Blondin (Montreal), A. Finkel (Cachan), S. Haddad (Cachan), P. Hofman (Cachan), S. Kiefer (Oxford) and M. Lohrey (Siegen).

Speaker's bio:

-



Decision making at scale: Algorithms, Mechanisms, and Platforms
Dr. Ashish Goel | Stanford University

2016-06-09, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism.  Twitter has a similar relationship to news media, and Coursera to Universities. But there are no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe two algorithmic approaches towards large scale decision making that we are exploring.

a) Knapsack voting and participatory budgeting: All budget problems are knapsack problems at their heart, since the goal is to pack the largest amount of societal value into a budget. This naturally leads to "knapsack voting" where each voter solves a knapsack problem, or comparison-based voting where each voter compares pairs of projects in terms of benefit-per-dollar. We analyze natural aggregation algorithms for these mechanisms, and show that knapsack voting is strategy-proof. We will also describe our experience with helping implement participatory budgeting in close to two dozen cities and municipalities, and briefly comment on issues of fairness.
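
One simple way to picture knapsack-vote aggregation (a hypothetical sketch; the aggregation rules actually analyzed in this line of work are more refined, and the project names and costs below are invented): score each project by how many ballots include it, then greedily fund the most-voted projects that still fit the budget.

```python
from collections import Counter

def aggregate_knapsack_votes(projects, ballots, budget):
    """projects: {name: cost}. Each ballot is a set of project names whose
    total cost fits within the budget. Score projects by vote count, then
    greedily fund the most-voted projects (ties broken by lower cost)
    until the budget is exhausted."""
    votes = Counter(p for ballot in ballots for p in ballot)
    funded, remaining = [], budget
    for p in sorted(projects, key=lambda p: (-votes[p], projects[p])):
        if projects[p] <= remaining:
            funded.append(p)
            remaining -= projects[p]
    return funded

# Hypothetical example: costs in thousands, total budget of 100.
projects = {'park': 60, 'library': 50, 'bike lanes': 30, 'streetlights': 20}
ballots = [
    {'park', 'bike lanes'},
    {'library', 'bike lanes', 'streetlights'},
    {'park', 'streetlights'},
]
print(aggregate_knapsack_votes(projects, ballots, 100))
```

Note how the budget constraint, not just the vote counts, shapes the outcome: the most expensive popular project can lose to a cheaper, less popular one that still fits.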

b) Triadic consensus: Here, we divide individuals into small groups (say groups of three) and ask them to come to consensus; the results of the triadic deliberations in each round form the input to the next round. We show that this method is efficient and strategy-proof in fairly general settings, whereas no pair-wise deliberation process can have the same properties.
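
The triadic scheme can be sketched as repeated rounds of small-group decisions (a drastic simplification: in the real mechanism each triad deliberates to consensus, rather than mechanically taking a majority):

```python
from collections import Counter

def triadic_consensus(opinions):
    """Repeatedly partition the opinions into triads; each triad forwards
    its majority opinion to the next round, until one opinion remains."""
    while len(opinions) > 1:
        nxt = []
        for i in range(0, len(opinions), 3):
            triad = opinions[i:i + 3]
            nxt.append(Counter(triad).most_common(1)[0][0])
        opinions = nxt
    return opinions[0]

# Nine participants, two rounds of triads.
print(triadic_consensus(['A', 'A', 'B', 'B', 'A', 'A', 'B', 'B', 'B']))
```

The hierarchy of small groups is what makes the process both scalable (each person only ever talks to two others) and hard to manipulate.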

This is joint work with Tanja Aitamurto, Brandon Fain, Anilesh Krishnaswamy, David Lee, Kamesh Munagala, and Sukolsak Sakshuwong.

Speaker's bio:

Ashish Goel is a Professor of Management Science and Engineering and (by courtesy) Computer Science at Stanford University, and a member of Stanford's Institute for Computational and Mathematical Engineering. He received his PhD in Computer Science from Stanford in 1999, and was an Assistant Professor of Computer Science at the University of Southern California from 1999 to 2002. His research interests lie in the design, analysis, and applications of algorithms; current application areas of interest include social networks, Internet commerce, and large scale data processing. Professor Goel is a recipient of an Alfred P. Sloan faculty fellowship (2004-06), a Terman faculty fellowship from Stanford, an NSF Career Award (2002-07), and a Rajeev Motwani mentorship award (2010). He was a co-author on the paper that won the best paper award at WWW 2009, and was a research fellow at Twitter from 2009 to 2014, where he designed and prototyped Twitter's monetization and personalization algorithms. Professor Goel is also Principal Scientist at Teapot, Inc.



Truly Continuous Mobile Sensing for Behaviour Modelling
Cecilia Mascolo | University of Cambridge

2016-05-27, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 002

Abstract:

In this talk I will first introduce my general research interests, which span from mobility modelling and geo-social network analysis to mobile sensing and mobile systems. I will then describe our work on understanding patterns of mobility through sensing, primarily using the microphone, accelerometer, and gyroscope, and the techniques that make such sensing energy efficient enough to run continuously on smartphones and wearables. Examples will be drawn from our studies in mental health monitoring, vehicular mobility monitoring and organization analytics.

Speaker's bio:

Cecilia Mascolo is Full Professor of Mobile Systems in the Computer Laboratory, University of Cambridge, UK. Prior to joining Cambridge in 2008, she was a faculty member in the Department of Computer Science at University College London. She holds a PhD from the University of Bologna. Her research interests are in human mobility modelling, mobile and sensor systems and networking, and spatio-temporal data analysis. She has published in a number of top-tier conferences and journals in the area, and her investigator experience spans more than twenty projects funded by Research Councils and industry. She has served as an organizing and programme committee member of over fifty mobile, sensor systems and networking conferences and workshops. She sits on the editorial boards of IEEE Pervasive Computing, IEEE Transactions on Mobile Computing and ACM Transactions on Sensor Networks. More details at www.cl.cam.ac.uk/users/cm542



From Proteins to Robots: Learning to Optimize with Confidence
Andreas Krause | ETH Zürich

2016-04-21, 14:30 - 15:30
Saarbrücken building E1 4, room 024

Abstract:

With the success of machine learning, we increasingly see learning algorithms make decisions in the real world. Often, however, this is in stark contrast to the classical train-test paradigm, since the learning algorithm affects the very data it must operate on. I will explain how predictive confidence bounds can guide data acquisition in a principled way to make effective decisions in a variety of complex settings. I will present algorithms with performance guarantees relying on the notion of submodularity, a natural notion of diminishing returns. I will also discuss several applications, ranging from autonomously guiding wetlab experiments in protein structure optimization, to safe automatic parameter tuning on a robotic platform.

Speaker's bio:

Andreas Krause is an Associate Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. Before that he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. and M.Sc. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received an ERC Starting Investigator grant, the Deutscher Mustererkennungspreis, an NSF CAREER award, the Okawa Foundation Research Grant recognizing top young researchers in telecommunications as well as the ETH Golden Owl teaching award. His research in learning and adaptive systems that actively acquire information, reason and make decisions in large, distributed and uncertain domains, such as sensor networks and the Web received awards at several premier conferences and journals.



Data-driven Software security: Motivation and Methods
Ulfar Erlingsson | Google

2016-04-20, 13:00 - 14:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

For computer software, our security models, policies, mechanisms, and means of assurance were primarily conceived and developed before the end of the 1970's. However, since that time, software has changed radically: it is thousands of times larger, comprises countless libraries, layers, and services, and is used for more purposes, in far more complex ways. This suggests that we should revisit some of our core computer security concepts. For example, what does the Principle of Least Privilege mean when all software contains libraries that can express arbitrary functionality? And, what security policy should be enforced when software is too complex for either its developers or its users to explain its intended behavior in detail?

One possibility is to take an empirical, data-driven approach to modern software, and determine its exact, concrete behavior via comprehensive, online monitoring. Such an approach can be a practical, effective basis for security, as demonstrated by its success in spam and abuse fighting, but its use to constrain software behavior raises many questions. In particular, two questions seem critical. First, is it possible to learn the details of how software *is* behaving, without intruding on the privacy of its users? Second, are those details a good foundation for deriving security policies that constrain how software *should* behave?

This talk answers both these questions in the affirmative, as part of an overall approach to data-driven security. Specifically, the talk describes techniques for learning detailed software statistics while providing differential privacy for its users, and how deep learning can help derive useful security policies that match users' expectations with intended software behavior. Those techniques are both practical and easy to adopt, and have already been used at scale for billions of users.
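
The flavor of privacy-preserving statistics collection can be conveyed with classic randomized response (a textbook simplification, not the mechanism actually deployed at scale): each user reports a noisy version of a private bit, giving them plausible deniability, yet the population-level rate can still be recovered.

```python
import random

def randomized_response(truth, p=0.75, rng=random):
    """Report the true bit with probability p; otherwise report a fair
    coin flip. Any single report is deniable, but the noise distribution
    is known, so aggregates can be de-biased."""
    if rng.random() < p:
        return truth
    return rng.random() < 0.5

def estimate_rate(reports, p=0.75):
    """Invert the noise: E[report] = p * rate + (1 - p) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p

# Simulate 100,000 users, 30% of whom have the sensitive attribute.
rng = random.Random(0)
true_rate = 0.3
reports = [randomized_response(rng.random() < true_rate, rng=rng)
           for _ in range(100_000)]
print(estimate_rate(reports))  # close to 0.3
```

The parameter p trades privacy for accuracy: a lower p gives each user stronger deniability but requires more reports to estimate the rate to the same precision.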

Speaker's bio:

Úlfar currently heads a security research team at Google. Previously, he has been a researcher at Microsoft Research, Silicon Valley, an Associate Professor at Reykjavik University, Iceland, and led security technology at two startups: GreenBorder and deCODE Genetics. He holds a PhD in computer science from Cornell University.



Telco Innovation at Home
Dina Papagiannaki | Telefonica, Barcelona

2016-04-14, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Connectivity services have been the focus of tremendous innovation in the recent past. The majority of such innovation, however, has primarily targeted mobile devices, despite the ever growing interest around home services. In this talk I am going to describe different types of innovation that I consider interesting for residential users and why they have or have not succeeded. The fundamental question is "What does it actually take to create interesting, novel user experiences for residential users?". In the talk I am going to focus on constraints, but also opportunities that could create meaningful value added services for the place that we love to call our home.

Speaker's bio:

Konstantina (Dina) Papagiannaki is the Scientific Director, responsible for the Internet, Systems and mobile research carried out by the scientific group at Telefonica Research and Development in Barcelona. Prior to that she was a senior researcher at Intel Labs; from 2004 until the end of 2006 in Cambridge, UK and from 2007 until 2011 in Pittsburgh, USA. From the beginning of 2000 until the end of 2003 she was a member of the IP Group at the Sprint Advanced Technology Labs. She was awarded her PhD by the Computer Science Department of University College London (UCL) in March 2003, receiving the Distinguished Dissertations Award 2003. She received her first degree in Electrical and Computer Engineering at the National Technical University of Athens (NTUA) in October 1998. She has chaired the technical program committee of the premier conferences in her field, authored more than 60 peer-reviewed papers, authored a book on the design and management of large-scale IP networks through Cambridge University Press, has 1 pending and 5 awarded patents, and has received the best paper awards at ACM Mobicom 2009, ACM IMC 2013, and ACM CoNEXT 2013. She held an adjunct faculty position in the Computer Science Department at Carnegie Mellon University from 2007 until 2011, and in 2008 she received the rising star award of the computer networking community of ACM. She has participated as an expert in panels for the Federal Commission of Communications, the National Telecommunications and Information Agency, and the National Science Foundation of the U.S.A, as well as the Association of Computing Machinery.



Sustainable Reliability for Distributed Systems
Manos Kapritsos | Microsoft Research, Redmond

2016-04-11, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Reliability is a first-order concern in modern distributed systems. Even large, well-provisioned systems such as Gmail and Amazon Web Services can be brought down by failures, incurring millions of dollars of cost and hurting company reputation. Such service outages are typically caused by either hardware failures or software bugs. The systems community has developed various techniques for dealing with both kinds of failures (e.g. replication, software testing), but those techniques come at a significant cost. For example, our replication techniques for handling hardware failures are incompatible with multithreaded execution, forcing a stark choice between reliability and performance. As for guarding against software failures, our only real option today is to test our system as best we can and hope we have not missed any subtle bugs. In principle there exists another option, formal verification, that fully addresses this problem, but its overhead in both raw performance and programming effort is widely considered far too high to be practical in real developments.

In this talk, I make the case for Sustainable Reliability, i.e. reliability techniques that provide strong guarantees without imposing unnecessary overhead that limits their practicality. My talk covers the challenges faced by both hardware and software failures and proposes novel techniques in each area. In particular, I will describe how we can reconcile replication and multithreaded execution by rethinking the architecture of replicated systems. The resulting system, Eve, offers an unprecedented combination of strong guarantees and high performance. I will also describe IronFleet, a new methodology that brings formal verification of distributed systems within the realm of practicality. Despite its strong guarantees, IronFleet incurs a very reasonable overhead in both performance and programming effort.

Speaker's bio:

Manos Kapritsos is a Postdoctoral Researcher at Microsoft Research in Redmond, WA. He received his Ph.D. from the University of Texas at Austin in 2014. His research focuses on designing reliable distributed systems, by applying fault-tolerant replication to combat machine failures and using formal verification to ensure software correctness.



Accountability for Distributed Systems
Andreas Haeberlen | University of Pennsylvania

2016-03-31, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Many of our everyday activities are now performed online - whether it is banking, shopping, or chatting with friends. Behind the scenes, these activities are implemented by large distributed systems that often contain machines from several different organizations. Usually, these machines do what we expect them to, but occasionally they 'misbehave' - sometimes by mistake, sometimes to gain an advantage, and sometimes because of a deliberate attack.

In society, accountability is widely used to counter such threats. Accountability incentivizes good performance, exposes problems, and builds trust between competing individuals and organizations. In this talk, I will argue that accountability is also a powerful tool for designing distributed systems. An accountable distributed system ensures that 'misbehavior' can be detected, and that it can be linked to a specific machine via some form of digital evidence. The evidence can then be used just like in the 'offline' world, e.g., to correct the problem and/or to take action against the responsible organizations.

I will give an overview of our progress towards accountable distributed systems, ranging from theoretical foundations and efficient algorithms to practical applications. I will also present one result in detail: a technique that can detect information leaks through covert timing channels.

Speaker's bio:

Andreas Haeberlen is a Raj and Neera Singh Assistant Professor at the University of Pennsylvania. His research interests are in distributed systems, networking, and security. Andreas received his PhD degree in Computer Science from Rice University in 2009; he is the recipient of an NSF CAREER award, and he was awarded the Otto Hahn Medal by the Max Planck Society.



Why applications are still draining our batteries, and how we can help
Aaron Schulman | Stanford University

2016-03-29, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Application developers lack tools to profile and compare the energy consumption of different software designs. This energy-optimization task is challenging because of unpredictable interactions between the application and increasingly complex power management logic. Yet, having accurate power information would allow application developers to both avoid inefficient designs and discover opportunities for new optimizations.

In this talk, I will show that it is possible to accurately measure system-level power and attribute it to application activities. I will present BattOr, a portable, easy-to-use power monitor that provides developers with a profile of the energy consumption of their designs, without modifications to hardware or software. I will show how Google developers are using BattOr to improve Chrome's energy efficiency. I will also show how fine-grained understanding of cellular power at different signal strengths enables novel energy optimizations. Finally, I will describe my future plans to attribute system-level power to individual hardware components and to investigate opportunities presented by instrumenting every server in a data center with fine-grained power monitoring.

Speaker's bio:

Aaron Schulman is a Postdoctoral Scholar at Stanford working with Sachin Katti; he earned his Ph.D. in Computer Science from the University of Maryland, where he was advised by Neil Spring. His research interests are in low-power embedded systems, wireless communication, and network measurement. Aaron’s research on the BattOr power monitor has been funded by Google, is being commercialized by his startup Mellow Research, and is becoming Google’s de facto standard tool for measuring the energy consumption of the Chrome web browser. For his dissertation, Aaron provided the first observations of fundamental factors that limit the reliability of the Internet’s critical last-mile infrastructure. His dissertation was selected to receive the 2013 ACM SIGCOMM Doctoral Dissertation Award. http://stanford.edu/~aschulm



What's in a Game? An intelligent and adaptive approach to security
Arunesh Sinha | University of Southern California

2016-03-23, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Understanding the complex defender-adversary interaction in adversarial settings allows for the design of intelligent and adaptive defenses. Game theory is a natural model for such multi-agent interaction. However, significant challenges need to be overcome in order to apply game theory in practice. In this talk, I will present my work on addressing two such challenges: scalability and learning adversary behavior. First, I will present a game model of screening of passengers at airports and a novel optimization approach, based on randomized allocation and disjunctive programming techniques, to solve large instances of the problem. Next, I will present an approach that learns adversary behavior and then plans optimal defensive actions, thereby bypassing standard game-theoretic assumptions such as rationality. However, a formal Probably Approximately Correct (PAC) analysis of the learning module in such an approach reveals conditions under which learning followed by optimization can produce sub-optimal results. This emphasizes the need for formal compositional reasoning when using learning in large systems.

The airport screening work was done in collaboration with the Transportation Security Administration in the USA. The approach of learning adversary behavior was applied to predictive policing in collaboration with the University of Southern California (USC) police, and is being tested on the USC campus.

Speaker's bio:

-



Performance-aware Repair for Concurrent Programs
Arjun Radhakrishna | University of Pennsylvania

2016-03-21, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

We present a recent line of work on automated synthesis of synchronization constructs for concurrent programs. Our techniques are inspired by a study of the most common types of concurrency bugs and bug fixes in Linux device drivers. As opposed to classical techniques, which tend to use expensive synchronization constructs, our technique attempts to use inexpensive program transformations, such as reordering independent statements, to improve the performance of generated fixes.

Our techniques are based on the observation that a large fraction of concurrency bugs are data-independent. This observation allows us to characterize and fix concurrency bugs based only on the order of execution of the statements involved. We evaluated our techniques on several real concurrency bugs that occurred in Linux device drivers, and showed that our synthesis procedure is able to produce more efficient and "programmer-like" bug fixes.

We finish the talk with a brief note on the general theme of soft specifications, such as performance and energy consumption, in program synthesis. Specifically, we will discuss the use of quantitative specifications and their applications to resource management in embedded and cyber-physical systems.

Speaker's bio:

Arjun Radhakrishna is a post-doctoral researcher at the University of Pennsylvania. Previously, he completed his PhD at the Institute of Science and Technology, Austria advised by Prof. Thomas A. Henzinger. His research focuses primarily on using programming language techniques, specifically, automated program synthesis, for rigorous systems engineering.  His current research interests include the use of alternative specification mechanisms to capture subtle soft requirements on computing systems, such as program performance, energy consumption, or a program's robustness to errors. He is also interested in verification and synthesis of concurrent programs, in particular, device drivers.



Online social interactions: a lens on humans and a world for humans
Chenhao Tan | Cornell University

2016-03-17, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Online social interactions have become an integral part of people's lives, e.g., presidential candidates use Facebook and Twitter to engage with the public, programmers rely on Stackoverflow to write code, and various communities have been forming online. This unprecedented amount of social interaction offers tremendous opportunities to understand human behavior. Such an understanding can induce significant social impact, ranging from influencing election outcomes to better communication for everyone. 

My research leverages newly available massive datasets of social interactions to understand human behavior and predict human decisions. These results can be used to build or improve socio-technical systems. In this talk, I will explain my research at both micro and macro levels. At the micro level, I investigate the effect of wording in message sharing via natural experiments. I develop a classifier that outperforms humans in predicting which tweet will be retweeted more. At the macro level, I examine how users engage with multiple communities and find that, surprisingly, users continually explore new communities on Reddit. Moreover, their exploration patterns in their early "life" can be used to predict whether they will eventually abandon Reddit. I will finish with some discussion of future research directions in understanding human behavior.

Speaker's bio:

Chenhao Tan is a Ph.D. Candidate in the Department of Computer Science at Cornell University. He earned Bachelor's degrees in Computer Science and Economics from Tsinghua University. His research spans a wide range of topics in social computing. He has published papers primarily at ACL and WWW, and also at KDD, WSDM, ICWSM, etc. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He also won a Facebook fellowship and a Yahoo! Key Scientific Challenges award.



Securing the Internet by Proving the Impossible
Dave Levin | University of Maryland

2016-03-14, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The state of Internet security today is largely reactive, continually raising the defensive bar in response to increasingly sophisticated attackers. In this talk, I will present an alternate approach to building Internet systems that underlies much of my work: instead of reactively working around some attacks, what if we were to make them impossible in the first place?

I will discuss two primitives my collaborators and I have created that provide small proofs of impossibility, and I will demonstrate how they can be applied to solve large-scale problems, including censorship resistance, digital currency, and online voting. First, I will present TrInc, a small piece of trusted hardware that provides proof that an attacker could not have sent conflicting messages to others. Second, I will present Alibi Routing, a peer-to-peer system that provides proof that a user's packets could not have gone through a region of the world the user requested them to forbid. Finally, I will describe some of my ongoing and future efforts, including securing the Web's public key infrastructure.

Speaker's bio:

Dave Levin is a research scientist and co-chair of the Computer Science Undergraduate Honors program at the University of Maryland. He previously worked in the Social Computing Group at Hewlett Packard Labs after getting his PhD from UMD in 2010. His work lies at the intersection of networking, security, and economics. Dave has received a best paper award at NSDI, and a best reviewer award from ACM SIGCOMM.



Stamping Out Concurrency Bugs
Baris Kasikci | EPFL

2016-03-10, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The shift to multi-core architectures in the past ten years pushed developers to write concurrent software to leverage hardware parallelism. The transition to multi-core hardware happened at a more rapid pace than the evolution of associated programming techniques and tools, which made it difficult to write concurrent programs that are both efficient and correct. Failures due to concurrency bugs are often hard to reproduce and fix, and can cause significant losses.

In this talk, I will first give an overview of the techniques we developed for the detection, root cause diagnosis, and classification of concurrency bugs. Then, I will discuss how the techniques we developed have been adopted at Microsoft and Intel. I will then discuss in detail Gist, a technique for the root cause diagnosis of failures. Gist uses hybrid static-dynamic program analysis and gathers information from real user executions to isolate root causes of failures. Gist is highly accurate and efficient, even for failures that rarely occur in production. Finally, I will close by describing future work I plan to do toward solving the challenges posed to software systems by emerging technology trends.

Speaker's bio:

Baris Kasikci completed his Ph.D. in the Dependable Systems Laboratory (DSLAB) at EPFL, advised by George Candea. His research is centered around developing techniques, tools, and environments that help developers build more reliable and secure software. He is interested in finding solutions that allow programmers to better reason about their code, and that efficiently detect bugs, classify them, and diagnose their root cause. He especially focuses on bugs that manifest in production, because they are hard and time-consuming to diagnose and fix. He is also interested in efficient runtime instrumentation, hardware and runtime support for enhancing system security, and program analysis under various memory models.

Baris is one of the four recipients of the VMware 2014-2015 Graduate Fellowship. During his Ph.D., he interned at Microsoft Research, VMware, and Intel. Before starting his Ph.D., he worked as a software engineer for four years, mainly developing real-time embedded systems software. Before joining EPFL, he was working for Siemens Corporate Technology. More details can be found at http://www.bariskasikci.org/.



Human Behavior in Networks
Robert West | Stanford University

2016-03-07, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Humans as well as information are organized in networks. Interacting with these networks is part of our daily lives: we talk to friends in our social network; we find information by navigating the Web; and we form opinions by listening to others and to the media. Thus, understanding, predicting, and enhancing human behavior in networks poses important research problems for computer and data science with practical applications of high impact. In this talk I will present some of my work in this area, focusing on (1) human navigation of information networks and (2) person-to-person opinions in social networks.

Network navigation constitutes a fundamental human behavior: in order to make use of the information and resources around us, we constantly explore, disentangle, and navigate networks such as the Web. Studying navigation patterns lets us understand better how humans reason about complex networks and lets us build more human-friendly information systems. As an example, I will present an algorithm for improving website hyperlink structure by mining raw web server logs. The resulting system is being deployed on Wikipedia's full server logs at terabyte scale, producing links that are clicked 10 times as frequently as the average link added by human Wikipedia editors.

Communication and coordination through natural language is another prominent human network behavior. Studying the interplay of social network structure and language has the potential to benefit both sociolinguistics and natural language processing. Intriguing opportunities and challenges have arisen recently with the advent of online social media, which produce large amounts of both network and natural language data. As an example, I will discuss my work on person-to-person sentiment analysis in social networks, which combines the sociological theory of structural balance with techniques from natural language processing, resulting in a machine learning model for sentiment prediction that clearly outperforms both text-only and network-only versions.

I will conclude the talk by sketching interesting future directions for computational approaches to studying and enhancing human behavior in networks.

Speaker's bio:

Robert West is a sixth-year Ph.D. candidate in Computer Science in the Infolab at Stanford University, advised by Jure Leskovec. His research aims to understand, predict, and enhance human behavior in social and information networks by developing techniques in data science, data mining, network analysis, machine learning, and natural language processing. Previously, he obtained a Master's degree from McGill University in 2010 and a Diplom degree from Technische Universität München in 2007.



Efficient Formally Secure Compilers to a Tagged Architecture
Catalin Hritcu | Inria Paris

2016-02-22, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Severe low-level vulnerabilities abound in today's computer systems, allowing cyber-attackers to remotely gain full control. This happens in big part because our programming languages, compilers, and architectures were designed in an era of scarce hardware resources and too often trade off security for efficiency. The semantics of mainstream low-level languages like C is inherently insecure, and even for safer languages, establishing security with respect to a high-level semantics does not guarantee the absence of low-level attacks. Secure compilation using the coarse-grained protection mechanisms provided by mainstream hardware architectures would be too inefficient for most practical scenarios.

In this talk I will present a new project that is aimed at leveraging emerging hardware capabilities for fine-grained protection to build the first, efficient secure compilers for realistic programming languages, both low-level (the C language) and high-level (ML and F*, a dependently-typed variant). These compilers will provide a secure semantics for all programs and will ensure that high-level abstractions cannot be violated even when interacting with untrusted low-level code. To achieve this level of security without sacrificing efficiency, our secure compilers target a novel tagged architecture, which associates a metadata tag to each word and efficiently propagates and checks tags according to software-defined rules. Formally, our goal is full abstraction with respect to a secure high-level semantics. This property is much stronger than just compiler correctness and ensures that no machine-code attacker can do more harm to securely compiled components than a component in the secure source language already could.

Speaker's bio:

Catalin is a tenured Research Scientist at Inria Paris where he develops rigorous formal techniques for solving security problems. He is particularly interested in formal methods for security (memory safety, compartmentalization, access control, integrity, security protocols, information flow), programming languages (type systems, verification, proof assistants, property-based testing, semantics, formal metatheory, certified tools, dynamic enforcement), and the design and verification of security-critical systems (reference monitors, secure compilers, microkernels, secure hardware). He is a developer of the new F* verification system and of several other open source tools based on his research. Catalin was a PhD student at Saarland University and a Research Associate at the University of Pennsylvania before joining Inria Paris in September 2013.



Timing Guarantees for Cyber-Physical Systems
Linh Thi Xuan Phan | University of Pennsylvania

2016-02-15, 10:30 - 10:45
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Cyber-physical systems -- such as cars, pacemakers, and power plants -- need to interact with the physical world in a timely manner to ensure safety. It is important to have a way to analyze these systems and to prove that they can meet their timing requirements. However, modern cyber-physical systems are increasingly complex: they can involve thousands of tasks running on dozens of processors, many of which can have multiple cores or shared caches. Existing techniques for ensuring timing guarantees cannot handle this level of complexity. In this talk, I will present some of my recent work that can help to bridge this gap, such as overhead-aware compositional scheduling and analysis. I will also discuss some potential applications, such as real-time cloud platforms and intrusion-resistant cyber-physical systems.

Speaker's bio:

Linh Thi Xuan Phan is an Assistant Research Professor in the Department of Computer and Information Science at the University of Pennsylvania. Her interests include real-time systems, embedded systems, cyber-physical systems, and cloud computing. Her research develops theoretical foundations and practical tools for building complex systems with provable safety and timing guarantees. She is especially interested in techniques that integrate theory, systems, and application aspects. Recently, she has been working on methods for defending cyber-physical systems against malicious attacks, as well as on real-time cloud infrastructures for safety-critical and mission-critical systems. Linh holds a Ph.D. degree in Computer Science from the National University of Singapore (NUS); she received the Graduate Research Excellence Award from NUS for her dissertation work.



Verasco, a formally verified C static analyzer
Jacques-Henri Jourdan | INRIA

2016-01-06, 13:00 - 14:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 005

Abstract:

This talk will present the design and soundness proof of Verasco, a formally verified static analyzer for most of the ISO C99 language (excluding recursion and dynamic allocation), developed using the Coq proof assistant. Verasco aims at establishing the absence of run-time errors in the analyzed programs. It enjoys a modular architecture that supports the extensible combination of multiple abstract domains, both relational and non-relational. It includes a memory abstract domain, an abstract domain of arithmetical symbolic equalities, an abstract domain of intervals, an abstract domain of arithmetical congruences, and an octagonal abstract domain.

Verasco integrates with the CompCert formally verified C compiler, so that not only is the soundness of the analysis results guaranteed with mathematical certainty, but these guarantees also carry over to the compiled code.

Speaker's bio:

-



Embedded Control Systems: From Theory to Implementation
Amir Aminifar | Linköping University

2016-01-05, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Today, many embedded and cyber-physical systems, e.g., in the automotive domain, comprise several control applications. Guaranteeing the stability of these control applications is perhaps the most fundamental requirement in implementing them. Often, however, the design of such systems is done without considering the implementation impacts. In this case, the guarantees provided at design time might not be preserved in the final implementation. In this talk, we discuss the implementation-aware design of embedded control systems.

Speaker's bio:

-



The CakeML verified compiler
Scott Owens | University of Kent

2015-12-17, 13:00 - 14:30
Saarbrücken building E1 5, room 029

Abstract:

CakeML is a new ML dialect aimed at supporting formally verified programs. The CakeML project has several aspects including formal semantics and metatheory, a verified compiler, a formal connection between its semantics and higher-order logic (in the HOL4 interactive theorem prover), and example verified applications written in CakeML and HOL4. The project is an active collaboration between Ramana Kumar at NICTA, Magnus Myreen at Chalmers, Michael Norrish at NICTA, Yong Kiam Tan at A*STAR, Singapore, and myself.

In this talk, I will explain the architecture of CakeML's verified compiler, focussing on a new optimising backend that we are currently developing.

CakeML's web site is https://cakeml.org, and development is hosted on GitHub at https://github.com/CakeML/cakeml.

Speaker's bio:

-



Rigorous Architectural Modelling for Production Multiprocessors
Kathy Gray | University of Cambridge

2015-12-16, 10:30 - 12:00
Saarbrücken building E1 5, room 029

Abstract:

Processor architectures are critical interfaces in computing, but they are typically defined only by prose and pseudocode documentation. This is especially problematic for the subtle concurrency behaviour of weakly consistent multiprocessors such as ARM and IBM POWER: the traditional documentation does not define precisely what programmer-observable behaviour is (and is not) allowed for concurrent code; it is not executable as a test oracle for pre-Silicon or post-Silicon hardware testing; it is not executable as an emulator for software testing; and it is not mathematically rigorous enough to serve as a foundation for software verification.

In this talk, I will present a rigorous architectural envelope model for IBM POWER and ARM multiprocessors, that aims to support all of these for small-but-intricate test cases, integrating an operational concurrency model with an ISA model for the sequential behaviour of a substantial fragment of the user-mode instruction set (expressed in a new ISA description language). I will present the interface between the two, whose requirements drove the development of our new language Sail. I will also present the interesting aspects of Sail's dependent type system, which is a light-weight system balancing the benefits of static bounds and effects checking with the usability of the language for engineers. Further, our models can be automatically translated into executable code, which, combined with front-ends for concurrency litmus tests and ELF executables, can interactively or exhaustively explore all the allowed behaviours of small test cases.

Joint work with: S. Flur, G. Kerneis*, L. Maranget**, D. Mulligan, C. Pulte, S. Sarkar***, and P. Sewell, University of Cambridge (* Google, ** Inria, *** St Andrews)

Speaker's bio:

-



The C standard formalized in Coq
Robbert Krebbers | Aarhus University

2015-12-15, 10:30 - 11:30
Saarbrücken building E1 5, room 029

Abstract:

In my PhD thesis I have developed a formal semantics of a significant part of the C programming language as described by the C11 standard. In this talk I will give a brief overview of the main parts of my formalized semantics.

* A structured memory model based on trees to capture subtleties of C11 that have not been addressed by others.

* A core C language with a small step operational semantics. The operational semantics is non-deterministic due to unspecified expression evaluation order.

* An explicit type system for the core language that enjoys type preservation and progress.

* A type sound translation of actual C programs into the core language.

* An executable semantics that has been proven sound and complete with respect to the operational semantics.

* Extensions of separation logic to reason about subtle constructs in C like non-determinism in expressions, and gotos in the presence of block scope variables.

Speaker's bio:

-



CLOTHO: Saving Programs from Malformed Strings and Incorrect String-Handling
Aritra Dhar | Xerox Research Center India

2015-12-08, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

Software is susceptible to malformed data originating from untrusted sources. Occasionally the programming logic or constructs used are inappropriate to handle the varied constraints imposed by legal and well-formed data. Consequently, software may produce unexpected results or even crash. In this talk, we present Clotho, a novel hybrid approach that saves such software from crashing when failures originate from malformed strings or inappropriate handling of strings. Clotho statically analyzes a program to identify statements that are vulnerable to failures related to associated string data. Clotho then generates patches that are likely to satisfy constraints on the data, and in case of failures produces program behavior close to what is expected. The precision of the patches is improved with the help of a dynamic analysis.

We have implemented Clotho for the Java String API, and our evaluation based on several popular open-source libraries shows that Clotho generates patches that are semantically similar to the patches generated by the programmers in the later versions. Additionally, these patches are activated only when a failure is detected, and thus Clotho incurs no runtime overhead during normal execution, and negligible overhead in case of failures.

Speaker's bio:

Aritra is a research engineer at Xerox Research Center India and a prospective PhD student. He has an M.Tech degree from IIIT-Delhi and is interested in program analysis, cryptocurrency, and wireless sensor networks.



Impact of Multicore on Cyber-Physical Systems: Challenges and Solutions
Dr. Marco Caccamo | University of Illinois

2015-11-26, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

The benefits of adopting emerging multicore processors include reductions in space, weight, power, and cooling, while increasing CPU bandwidth per processor. However, the existing real-time software engineering process is based on the constant worst case execution time (WCET) assumption, which states that the measured worst case execution time of a software task when executed alone is the same as when that task is running together with other tasks. While this assumption is correct for single-core chips, it is NOT true for multicore chips. At present, interference between cores can cause delay spikes as high as 600% in industry benchmarks. This presentation reviews the main challenges faced by the embedded industry today when adopting multicore in safety-critical embedded systems. A discussion on the notion of Single Core Equivalence follows.

Speaker's bio:

Caccamo received his Ph.D. in computer engineering from Scuola Superiore Sant’Anna, Pisa, Italy in January 2002 and joined University of Illinois at Urbana-Champaign shortly after graduation, where he is a professor of computer science. He also has a courtesy appointment in the Department of Electrical and Computer Engineering (ECE) at the University of Illinois. In broad terms, his research interests are centered on the area of embedded systems. He has worked in close collaboration with avionics, farming, and automotive industries developing innovative software architectures and toolkits for the design automation of embedded digital controllers, and low-level resource management solutions for real-time operating systems running on multicore architectures. He has authored/coauthored more than 90 refereed publications in real-time and embedded networked computing systems. He has been a guest editor of the Journal of Real-Time Systems and he is the program chair of RTSS'15. He was previously program chair of RTAS and also served as both general chair of RTAS and Cyber Physical Systems Week (CPSWeek’11). He was also awarded an NSF CAREER Award in 2003 and is a senior member of IEEE.



Declarative Programming for Eventual Consistency
Dr. Suresh Jagannathan | Purdue University

2015-11-19, 10:30 - 10:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In geo-replicated distributed data stores, the need to ensure responsiveness in the face of network partitions and processor failures results in implementations that provide only weak (so-called eventually consistent) guarantees on when data updated by one process becomes visible to another. Applications must be carefully constructed to be aware of unwanted inconsistencies permitted by such implementations (e.g., having negative balances in a bank account, or having an item appear in a shopping cart after it has been removed), but must balance correctness concerns with performance and scalability needs. Because understanding these tradeoffs requires subtle reasoning and detailed knowledge about the underlying data store, implementing robust distributed applications in such environments is often an error-prone and expensive task.

Speaker's bio:

-



Discrimination Data Analysis
Salvatore Ruggieri | University of Pisa

2015-10-22, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

The collection and analysis of observational and experimental data represent the main tools for assessing the presence, the extent, the nature, and the trend of discrimination phenomena. Data analysis techniques have been proposed in the last fifty years in the economic, legal, statistical, and, recently, in the data mining literature. This is not surprising, since discrimination analysis is a multi-disciplinary problem, involving sociological causes, legal argumentations, economic models, statistical techniques, and computational issues. The objective of the talk is to provide first an introduction on concepts, problems, application areas, datasets, methods and approaches from a multidisciplinary perspective; and then to delve into the data-driven approach based on data mining for discrimination discovery and prevention. Reference: A. Romei, S. Ruggieri. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, Vol. 29, Issue 5, November 2014, 582-638.

Speaker's bio:

Salvatore Ruggieri is Associate Professor at the Computer Science Department of the University of Pisa, and he is currently the director of the Master Programme in Business Informatics. He holds a Ph.D. in Computer Science (1999), whose thesis was awarded by the Italian Chapter of EATCS as the best Ph.D. thesis in Theoretical Computer Science. He has been the treasurer of the Italian Association for Artificial Intelligence (2003-2007), and the program chair of the XIII Italian Symposium on Artificial Intelligence, Pisa 10-12 December 2014. He was the coordinator of Enforce, a national FIRB (Italian Fund for Basic Research) young researcher project on computer science and legal methods for enforcing the personal rights of non-discrimination and privacy in ICT systems (ENFORCE, 2010-2014, enforce.di.unipi.it). Salvatore regularly participates in the program committee of top conferences such as KDD, ECML-PKDD, ICDM, and he has been the guest editor of two special issues: Intelligenza Artificiale journal on Artificial Intelligence for Society and Economy, June 2015; and Artificial Intelligence and Law journal on Computational Methods for Enforcing Privacy and Fairness in the Knowledge Society, June 2014. He is a member of the KDD LAB research group, a joint initiative of the University of Pisa and the National Research Council (www-kdd.isti.cnr.it), with research interests focused on data mining and knowledge discovery, including: discrimination measurement, segregation discovery, fairness in classification, interplay between privacy and fairness, languages and systems for modelling the process of knowledge discovery; sequential and parallel classification algorithms; frequent itemset mining; web mining and personalization; and applications (CRM, operational risk). Past research topics include program verification and termination methods, constraint programming, quantified linear systems, intelligent multimedia presentation systems, and software quality models.



Complexity of the Scheduling Problem for Periodic Real-Time Tasks
Pontus Ekberg | Uppsala University, Sweden

2015-10-20, 13:30 - 13:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In real-time scheduling theory we are interested in finding out whether a set of repeatedly activated computational tasks can be co-executed on a shared computer platform, such that all of their deadlines are met. The periodic and sporadic task models are among the most basic formalisms used for modeling computational tasks. Among computer platforms considered, the preemptive uniprocessor is one of the simplest. To decide whether a given set of periodic or sporadic tasks can be scheduled on a preemptive uniprocessor so that all deadlines are met is therefore a core problem in real-time scheduling theory. Still, the complexity of this decision problem has long been open. In this talk, which is targeted to a general audience, I will outline some recent results pinpointing this complexity.   

Speaker's bio:

Pontus Ekberg is a PhD student at Uppsala University, Sweden. There he has worked on the analysis of algorithms and models for real-time scheduling theory.



Trustworthy File Systems
Christine Rizkallah | NICTA

2015-09-21, 13:00 - 14:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

In this talk, I will present an approach to ease the verification of file-system code using a domain-specific language, currently called CoGent, supported by a self-certifying compiler that produces C code, a high-level specification, and translation correctness proofs.

CoGent is a restricted, polymorphic, higher-order, and purely functional language with linear types and without the need for a trusted runtime or garbage collector. It compiles to efficient C code that is designed to interoperate with existing C functions.

For a well-typed CoGent program, the compiler produces C code, a high-level shallow embedding of its semantics in Isabelle/HOL, and a proof that the C code correctly implements this embedding. The aim is for proof engineers to reason about the full semantics of real-world systems code productively and equationally, while retaining the interoperability and leanness of C.

I will give a high-level overview of the formal verification stages of the compiler, which include automated formal refinement calculi, a switch from imperative update semantics to functional value semantics formally justified by the linear type system, and a number of standard compiler phases such as type checking and monomorphisation. The compiler certificate is a series of language-level meta proofs and per-program translation validation phases, combined into one coherent top-level theorem in Isabelle/HOL.

Speaker's bio:

-



A Quantifier-Elimination Heuristic for Octagonal Constraints
Deepak Kapur | University of New Mexico

2015-09-18, 14:00 - 14:00
Kaiserslautern building G26, room 112 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Octagonal constraints expressing weakly relational numerical properties of the form $l \le \pm x \pm y \le h$ have been found useful and effective in static analysis of numerical programs. Their analysis serves as a key component of the tool ASTREE, based on the abstract interpretation framework proposed by Patrick Cousot and his group, which has been successfully used to analyze commercial software consisting of hundreds of thousands of lines of code. This talk will discuss a heuristic based on the quantifier-elimination (QE) approach presented by Kapur (2005) that can be used to automatically derive loop invariants expressed as a conjunction of octagonal constraints in $O(n^2)$, where $n$ is the number of program variables over which these constraints are expressed. This is in contrast to the algorithms developed in Miné's thesis, which have complexity at least $O(n^3)$. The restricted QE heuristic usually generates invariants stronger than those obtained by the freely available Interproc tool. Extensions of the proposed approach to generating disjunctive invariants will be presented.

Speaker's bio:

PhD (1980), Massachusetts Institute of Technology (MIT), Cambridge, MA; M. Tech. (1973) and B. Tech. (1971), Indian Institute of Technology (IIT), Kanpur, India.

Chair, Department of Computer Science, University of New Mexico, Jan. 1999--June 2006.

Adjunct Professor, School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India, 2003 onwards.

Professor, Department of Computer Science, Jan. 1988-- Dec. 1998, University at Albany. Founder and former Director, Institute for Programming and Logics, University at Albany.

Member, Research Staff, Computer Science Branch, General Electric Corporate Research and Development, Schenectady, NY, 1980-87.



Timing-Aware Control Software Design for Automotive Systems
Dr. Arne Hamann | Robert Bosch GmbH

2015-06-26, 10:30 - 12:00
Kaiserslautern building G26, room 112 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The underlying theories of both control engineering and real-time systems engineering assume idealized system abstractions that mutually neglect central aspects of the other discipline. Control engineering theory, on the one hand, usually assumes jitter free sampling and constant input-output latencies disregarding complex real-world timing effects. Real-time engineering theory, on the other hand, uses abstract performance models that neglect the functional behavior, and derives worst-case situations that have little expressiveness for control functionalities in physically dominated automotive systems. As a consequence, there is a lot of potential for a systematic co-engineering between both disciplines, increasing design efficiency and confidence. In this talk, possible approaches for such a co-engineering and their current applicability to real world problems are discussed. In particular, simulation-based and formal verification techniques are compared for different construction principles of automotive real-time control software.

Speaker's bio:

-



Building an Operating System for the Data Center
Simon Peter | University of Washington

2015-04-01, 10:30 - 10:30
Saarbrücken building E1 5, room 29 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Data centers run a range of important applications with ever increasing performance demands, from cloud and server computing to Big Data and eScience. However, the scaling of CPU frequency has stalled in recent years, leading to hardware architectures that no longer transparently scale software performance. Two trends stand out: 1) Instead of frequency, hardware architectures increase the number of CPU cores, leaving complex memory system performance and CPU scheduling tradeoffs exposed to software. 2) Network and storage I/O performance continues to improve, but without matching improvements in CPU frequency. Software thus faces ever increasing I/O efficiency demands.

In my research, I address how operating systems (OSes) can handle these growing complexity and performance pressures to avoid becoming the limiting factor to performance. I first explain how current OS architecture is already inadequate to address these trends, limiting application performance. I then present how the OS can be redesigned to eliminate the performance limitations without compromising on existing features or application compatibility. I finish with an outlook on how these hardware trends affect software in the future and present ideas to address them.

Speaker's bio:

Simon is a postdoctoral research associate at the University of Washington, where he leads research in operating systems and networks. His postdoctoral advisors are Tom Anderson and Arvind Krishnamurthy. Simon received a Ph.D. in Computer Science from ETH Zurich in 2012 and an MSc in Computer Science from the Carl-von-Ossietzky University Oldenburg, Germany in 2006.

Simon's research focus is on data center performance issues. For his work on the Arrakis high I/O performance operating system, he received the Jay Lepreau best paper award (2014) and the Madrona prize (2014). Previously, Simon has worked on the Barrelfish multicore operating system and conducted further award-winning systems research at various locations, including MSR Silicon Valley, MSR Cambridge, Intel Labs Germany, UC Riverside, and the NSF.



Tracking Resistance with Dissent
David Wolinsky | Yale University

2015-03-30, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

As the most serious cyber-attack threats rapidly shift from untargeted toward increasingly targeted methods, it is becoming more crucial for organizations to protect the identity and location of their members against malicious tracking and surveillance. The common approach by organizations to use encryption is not enough, as metadata, such as the sender and receiver, has recently been shown to be as valuable and hence as dangerous as data. Employing existing anonymous communication tools, such as Tor, only deter but do not prevent tracking and targeting attacks. This talk describes three major challenges to protecting users: network-based traffic analysis attacks, intersection attacks against identities and traffic flows, and application / environment exploits. The talk then introduces practical approaches that address these challenges as implemented and evaluated in the Dissent project.

Speaker's bio:

David Wolinsky is a research scientist and lecturer at Yale University. He joined Yale in the Summer of 2011 after obtaining a PhD at the University of Florida. His research efforts primarily focus on building practical and secure distributed systems. During his PhD, he built a free-to-join compute grid, Grid Appliance (HPDC'11), that combined NSF funded and volunteer resources to produce a 1,000+ node system with 100,000s of compute hours. At Yale, he leads the Dissent project (OSDI'12), a novel group anonymous communication protocol that turned an interesting theoretical idea into a practical system.



Jellyfish: Networking Data Centers, Randomly
Ankit Singla | University of Illinois

2015-03-26, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Large Internet services, "big science", and increasingly, industrial R&D, are all powered by warehouse scale computing — tens of thousands of servers connected over a network. Increasing parallelism and big data analytics require that the network provide high throughput connectivity. In this talk, I will describe Jellyfish, our proposed design for such a network. Jellyfish uses a random graph topology, allowing construction at arbitrary sizes, easier network growth over time, and integration of newer and more powerful network equipment as it becomes available — practical problems that rigidly structured traditional networks fail to address. Surprisingly, Jellyfish also beats state-of-the-art real-world designs on throughput, by as much as 40%. In fact, we show that Jellyfish networks achieve near optimal throughput, i.e., one cannot build, using the same networking equipment, *any* network that provides much higher throughput.
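
The core idea is simple enough to sketch. The following is a minimal, assumption-laden illustration (not the paper's construction procedure): build a random r-regular graph on n switches by pairing free "stubs", then measure the average shortest path, which stays small because random regular graphs are good expanders.

```python
import random
from collections import deque

def random_regular_graph(n, r, seed=0):
    """Jellyfish-style topology sketch: a random r-regular graph on n
    switches, built by repeatedly pairing free 'stubs' and retrying
    whenever a self-loop or duplicate edge appears."""
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(r)]
        rng.shuffle(stubs)
        edges, ok = set(), True
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            if u == v or (u, v) in edges or (v, u) in edges:
                ok = False
                break
            edges.add((u, v))
        if ok:
            return edges

def avg_shortest_path(n, edges):
    """Average hop count over all reachable pairs, via BFS from each node."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total, pairs = 0, 0
    for s in range(n):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

edges = random_regular_graph(40, 4)
print(avg_shortest_path(40, edges))  # small: paths grow only logarithmically
```

Growing the network is equally easy in this model: add a switch, break a few random links, and reconnect through the newcomer, which is exactly the incremental-expansion property the abstract highlights.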

Speaker's bio:

Ankit Singla is a PhD Candidate in Computer Science at the University of Illinois at Urbana-Champaign. He received his Bachelors in Technology (Computer Science) at IIT Bombay, India, in 2008. He is a winner of the 2012 Google PhD Fellowship. These days, he is refining a plan for building a speed-of-light Internet, which he loses no opportunity to talk about.



Principled and Practical Web Application Security
Deian Stefan | Stanford University

2015-03-02, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Large-scale theft of private user data has become a common occurrence on the web. A major factor in these privacy breaches is that developers specify and enforce data security policies by strewing checks throughout their application code. Overlooking even a single check can lead to vulnerabilities.

In this talk, I will describe a new approach to protecting sensitive data even when application code is buggy or malicious. The key ideas behind my approach are to separate the security and privacy concerns of an application from its functionality, and to use language-level information flow control (IFC) to enforce policies throughout the code. The main challenge of this approach is to design practical systems that average developers can easily adopt, while simultaneously leveraging formal semantics that rule out large classes of design error. The talk will cover a server-side web framework (Hails), a language-level IFC system (LIO), and a browser security architecture (COWL), which, together, provide end-to-end security against the privacy leaks that plague today's web applications.
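
The flavor of language-level IFC can be sketched in a few lines. The code below is a toy in the spirit of LIO, not its actual API: values carry labels, the computation's "current label" rises as secrets are read, and a single central check rejects any later write to a less-secret channel, replacing the scattered ad-hoc checks criticized above. All names here are illustrative.

```python
PUBLIC, SECRET = 0, 1  # a two-point label lattice: PUBLIC flows to SECRET

class LIOState:
    """Toy floating-label monitor (illustrative, not the real LIO)."""
    def __init__(self):
        self.current = PUBLIC  # label of everything observed so far

    def read(self, value, label):
        self.current = max(self.current, label)  # taint: join the labels
        return value

    def output(self, value, channel_label):
        # The one centralized policy check: no flow from high to low.
        if self.current > channel_label:
            raise PermissionError("flow from SECRET to PUBLIC blocked")
        return value

lio = LIOState()
lio.read("alice@example.com", SECRET)  # computation is now tainted SECRET
try:
    lio.output("leak", PUBLIC)         # buggy or malicious code tries to leak
except PermissionError as e:
    print(e)                           # the single IFC check stops the leak
```

The point is architectural: the policy lives in one place (the monitor), so application code can be buggy without the secret escaping.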

Speaker's bio:

Deian Stefan is a PhD student in Computer Science at Stanford. His research interests intersect systems, programming languages, and security. As part of his PhD work, Deian focused on web application security; he built practical systems with formal underpinnings that enable average developers to build secure web applications. Deian is a recipient of an NDSEG Fellowship and a Mozilla Research Grant for his work on web security. He is a co-founder and the CTO of GitStar Inc., a company that provides security-as-a-service to web developers. He is a member of the W3C Web Application Security Group, where he serves as editor of the COWL spec. He received his BE and ME in Electrical Engineering from Cooper Union.



Minimal Trusted Hardware Assumptions for Privacy-Preserving Systems
Aniket Kate | Saarland University

2015-02-26, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Trusted hardware modules are becoming prevalent in computing devices of all kinds. A broad trusted hardware assumption purports to solve almost all security problems in a trivial and uninteresting manner. However, relying entirely on hardware assumptions to achieve the security goals of a system can be impractical given the limited memory, bandwidth and CPU capabilities of available hardware modules, and it makes the designed system vulnerable to even a tiny overlooked or undiscovered flaw or side channel in the employed module. Thus, the key challenge in designing a trusted-hardware-based system is to determine a minimal hardware assumption required to achieve the system's goals, and to justify that assumption for an available hardware module.

In this talk, I will present my recent work on developing privacy-preserving systems based on the above insight. In particular, I will introduce a privacy-preserving transaction protocol for credit networks (PrivPay), an architecture for privacy-preserving online behavioral advertising (ObliviAd), and an asynchronous multiparty computation protocol with only an honest majority (NeqAMPC).

Speaker's bio:

Aniket Kate is a junior faculty member and an independent research group leader at Saarland University in Germany, where he is heading the Cryptographic Systems research group within the Cluster of Excellence. His primary research interests lie at the intersection of cryptography and systems security research. Along with producing theoretically elegant cryptographic results, he endeavors to make them useful in real-world scenarios. Before joining Saarland University in 2012, Aniket was a postdoctoral researcher at the Max Planck Institute for Software Systems (MPI-SWS), Germany. He received his PhD from the University of Waterloo, Canada in 2010, and his masters from the Indian Institute of Technology (IIT) Bombay, India in 2006.



Privacy, Security, and Online Disclosures: Combining HCI and Behavioral Science to Design Visceral Cues for Detection of Online Threats
Laura Brandimarte | Carnegie Mellon University

2015-02-19, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Online privacy and security decision making is complex, because it is affected both by objective risks and benefits from disclosure or protection of personal information, and by factors that do not directly affect economic trade-offs. For instance, design features - such as default visibility settings, look & feel of a website, granularity of privacy controls, or framing of privacy policies - as well as cognitive and behavioral biases affect risk perception. Behavioral sciences provide useful insights on how people respond to risks and threats. Of particular interest to my research is whether individuals detect, recognize, and react differently to "offline" and "online" threats. I will present a series of laboratory experiments that combine findings from HCI and behavioral sciences, showing how to help users of online sharing technologies detect online privacy and security threats, and thus make better informed decisions. The experiments demonstrate how sensorial, visceral stimuli from the offline, physical world can affect online privacy concern and online disclosures. The results show the importance of going beyond privacy and security usability research, and provide suggestions on how to improve interfaces to help users make sound privacy and security decisions.

Speaker's bio:

Laura Brandimarte is a post-doctoral fellow at Carnegie Mellon University. After undergraduate studies in economics in Rome and a Master of Science at the London School of Economics, she joined CMU to study the behavioral economics of privacy. In December 2012, she obtained her PhD from CMU in Public Policy and Management, with a specialization in Behavioral Science. Her current research interests include privacy decision making, cognitive and behavioral biases in privacy attitudes and choices, soft paternalism and privacy, risk perception and impression formation. Her research is mostly empirical and oriented towards practical policy and human-computer interaction implications.



Programming with Numerical Uncertainties
Dr. Eva Darulova | EPFL, Switzerland

2015-02-17, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Numerical software, common in scientific computing or embedded systems, inevitably uses an approximation of the real arithmetic in which most algorithms are designed. Finite-precision arithmetic, such as fixed-point or floating-point, is a common and efficient choice, but introduces an uncertainty on the computed result that is often very hard to quantify. We need adequate tools to estimate the errors introduced in order to choose suitable approximations which satisfy the accuracy requirements. I will present a new programming model where the scientist writes his or her numerical program in a real-valued specification language with explicit error annotations. It is then the task of our verifying compiler to select a suitable floating-point or fixed-point data type which guarantees the needed accuracy. I will show how a combination of SMT theorem proving, interval and affine arithmetic and function derivatives yields an accurate, sound and automated error estimation which can handle nonlinearity, discontinuities and certain classes of loops. We have further combined our error computation with genetic programming to not only verify but also improve accuracy. Finally, together with techniques from validated numerics we developed a runtime technique to certify solutions of nonlinear systems of equations, quantifying truncation in addition to roundoff errors.
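
A minimal interval-arithmetic sketch makes the underlying idea tangible. This is only the most basic ingredient of the error estimation described above; the talk's approach sharpens such enclosures with affine arithmetic, SMT solving, and function derivatives. The `Interval` class below is an illustrative stand-in, not the speaker's tool.

```python
class Interval:
    """Sound enclosure of a real value: the true result of every
    real-valued execution lies inside [lo, hi]."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: add endpoints.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: take min/max over all endpoint combinations.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# x is known only to within ±0.1; propagate the uncertainty through x*x + x.
x = Interval(0.9, 1.1)
print(x * x + x)  # a guaranteed enclosure of the true value
```

Interval arithmetic is sound but pessimistic (it ignores correlations between occurrences of the same variable), which is exactly why the combination with affine arithmetic and derivatives mentioned in the abstract is needed for tight bounds.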

Speaker's bio:

Eva Darulova is a postdoc at EPFL in the Laboratory for Automated Reasoning and Analysis. Her research interests include programming languages, software verification and in particular approximate computing. Her recent research focused on automated verification and synthesis for numerical programs where she developed techniques and tools for explicit handling of uncertainties. She received a PhD from EPFL in 2014 and a BS degree from University College Dublin in 2009 with a joint major in computer science and mathematical physics.



Trace Complexity of Information Diffusion
Alessandro Panconesi | Informatica - Sapienza, Università di Roma

2015-01-26, 10:30 - 11:30
Kaiserslautern building G26, room 112 / simultaneous videocast to Saarbrücken building E1 5, room 002

Abstract:

In recent years, we have witnessed the emergence of many sophisticated web services that allow people to interact on an unprecedented scale. The wealth of data produced by these new ways of communication can be used, in principle, to increase our understanding of human social behaviour, but a fundamental hurdle is posed by the sensitivity of these data. Access must of necessity be severely constrained in order to protect the privacy of the users and the confidentiality of the data. A very broad question arises naturally: can non-trivial conclusions about various social processes be inferred based only on such limited information? We give a few specific examples taken from our own research of what can, and cannot, be learned from digital traces. The talk describes joint work with several people: B. Abrahao, P. Brach, F. Chierichetti, R. Kleinberg, A. Epasto, and P. Sankowski.

Speaker's bio:

Alessandro Panconesi is a full professor of Computer Science at Sapienza, University of Rome. He holds a PhD degree in Computer Science from Cornell University. He was the recipient of the ACM Danny Lewin Award. Last year he was awarded a Google Focused Award and, previously, he received research faculty awards from IBM, Yahoo and Google. His main current research interests are distributed and randomised algorithms for social networks.



Dynamic Graph Algorithms - Upper and Lower Bounds
Monika Henzinger | University of Vienna

2015-01-22, 13:15 - 14:15
Saarbrücken building E1 4, room 024

Abstract:

Dynamic graph algorithms are data structures that maintain properties of graphs while they are modified by edge deletions and insertions. These data structures can be used, for example, to efficiently detect deadlocks in operating systems, compute shortest paths in navigation systems, or to speed up static algorithms, such as algorithms used in computer-aided verification or certain multi-commodity flow algorithms, that solve multiple graph problems on very similar graphs.

While basic properties, such as connectivity, in undirected graphs can be maintained in time polylogarithmic in the size of the graph per edge update, no such bounds are known for basic properties, such as reachability, in directed graphs or more sophisticated graph properties, such as shortest paths, in undirected graphs. We will present the state of the art for these problems, giving both upper and (conditional) lower bounds. Conditional lower bounds result from a relatively new research direction, where similar to hardness reductions in complexity theory, we show that based on the assumption that some well-known problems, such as Boolean matrix multiplication, cannot be solved fast, certain dynamic problems cannot be solved in polylogarithmic time per operation.
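
The gap described above is easy to see in code. Insertions-only ("incremental") connectivity is solved by a plain union-find structure in near-constant amortized time per operation, as in the sketch below; it is supporting *deletions* as well that requires the sophisticated polylogarithmic-time structures the talk surveys. This sketch is standard textbook material, not an algorithm from the talk.

```python
class UnionFind:
    """Incremental dynamic connectivity: edge insertions only.
    Deleting an edge may split a component, which union-find cannot
    undo -- the fully dynamic case is the hard one."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_edge(self, u, v):
        self.parent[self.find(u)] = self.find(v)

    def connected(self, u, v):
        return self.find(u) == self.find(v)

uf = UnionFind(5)
uf.insert_edge(0, 1)
uf.insert_edge(1, 2)
print(uf.connected(0, 2))  # True
print(uf.connected(0, 4))  # False
```
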

Speaker's bio:

-



Putting Threads on a Solid Foundation: Some Remaining Issues
Hans Boehm | Google, Palo Alto

2015-01-09, 15:00 - 16:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Shared memory parallel machines have existed since the 1960s and, with the proliferation of multi-core processors, shared memory parallel programming has become increasingly important. Unfortunately, some fairly basic questions about parallel program semantics have not been addressed until quite recently. Multi-threaded programming languages are on much more solid ground than they were as recently as a decade ago. Nonetheless a number of foundational issues have turned out to be surprisingly subtle and resistant to easy solutions. We briefly survey three such issues:

1) All major programming languages support atomic operations that enforce only very weak memory ordering properties. These implicitly rely on enforcement of dependencies to preclude certain "nonsensical" outcomes. Unfortunately, we do not know how to define a suitable notion of dependency.

2) Garbage-collected programming languages, such as Java, rely on finalization, most commonly to interact correctly with non-garbage-collected parts of the system. But finalization raises largely unsolved concurrency issues. As a result, almost all interesting uses of finalization are subtly, but seriously, incorrect.

3) It can be difficult to avoid object destruction or memory deallocation races in non-garbage-collected languages like C++. As a result, async(), the basic C++11 thread-spawning facility, has surprisingly complex semantics, with unintended consequences.

Speaker's bio:

-



Analyzing Dynamics of Choice among Discrete Alternatives
Andrew Tomkins | Google

2015-01-09, 10:00 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

In this talk we'll consider two problems in which users must select from a set of alternatives. In the first scenario, a user consumes a class of item repeatedly, such as listening to a sequence of songs or visiting a sequence of restaurants over time. Our goal is to understand the dynamics of repeated consumption of the same item. We present a model related to Herbert Simon's 1955 copying model, and analyze its effectiveness. In the second scenario, a user traverses a directed graph whose nodes represent items, and whose arcs represent related items recommended by the system. In this setting, we develop a model and algorithm for determining the underlying quality of each node based on traversal data. Our result provides a well-motivated unique solution to the problem of "reverse engineering" a Markov chain: finding a transition matrix given the graph and the steady state.
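
To make the second problem concrete, the sketch below works the *forward* direction: power iteration recovering the steady state of a small chain from a known transition matrix. The talk's result addresses the inverse, finding a transition matrix consistent with the graph and an observed steady state; this companion example (not from the talk) just fixes the notation.

```python
def steady_state(P, iters=1000):
    """Power iteration: repeatedly apply pi <- pi P until convergence.
    For an ergodic chain this converges to the stationary distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A 2-state chain: from state 0 move to 1 w.p. 0.5; from 1 return w.p. 0.25.
P = [[0.50, 0.50],
     [0.25, 0.75]]
pi = steady_state(P)
print(pi)  # close to [1/3, 2/3]
```

The inverse problem is underdetermined in general (many matrices share a steady state), which is why the uniqueness claim in the abstract, obtained by restricting transitions to the arcs of the given graph, is the interesting part.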

Speaker's bio:

Andrew Tomkins is an engineering director at Google working on web analysis, search, and personalization. His research has focused on measurement, modelling, and analysis of content, communities, and users on the World Wide Web. Prior to joining Google, he spent four years at Yahoo!, leading search research and serving as chief scientist of the search organization. He also spent eight years at IBM's Almaden Research Center, where he co-founded the WebFountain project and served as its chief scientist. He has published over 100 technical papers and submitted over sixty patents. Andrew received Bachelors degrees in Mathematics and Computer Science from MIT, and a PhD in CS from Carnegie Mellon University. He recently co-chaired the program of WWW2008 and KDD2010, and serves on the editorial board of ACM Transactions on the Web.



Formalising and Optimising Parallel Snapshot Isolation
Alexey Gotsman | IMDEA

2014-12-19, 13:00 - 13:00
Kaiserslautern building G26, room 111

Abstract:

Modern Internet services often achieve dependability by relying on geo-replicated databases that provide consistency models for transactions weaker than serialisability. We investigate a promising consistency model, Sovran et al.'s parallel snapshot isolation (PSI), which weakens classical snapshot isolation in a way that allows more efficient geo-replicated implementations. We first give a declarative specification of PSI that does not refer to implementation-level concepts and thus allows application programmers to reason about the behaviour of PSI databases more easily. We justify our high-level specification by proving its equivalence to the existing low-level one. Using our specification, we develop a criterion for checking when a set of transactions executing on PSI can be chopped into smaller pieces without introducing new behaviours. This allows application programmers to optimise the code of their transactions to execute them more efficiently. We find that our criterion is more permissive than the existing one for chopping serialisable transactions. These results contribute to understanding the complex design space of consistency models for geo-replicated databases. This is joint work with Andrea Cerone (IMDEA) and Hongseok Yang (Oxford).

Speaker's bio:

-



CoLoSL: Concurrent Local Subjective Logic
Azalea Raad | Imperial College London

2014-12-15, 14:00 - 15:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

A key difficulty in verifying shared-memory concurrent programs is reasoning compositionally about each thread in isolation. When the footprints of threads overlap with each other, existing program logics require reasoning about a static global shared resource, which impedes compositionality. We introduce the program logic CoLoSL, where each thread is verified with respect to its subjective view of the global shared state. This subjective view describes only that part of the global shared resource accessed by the thread. Subjective views may arbitrarily overlap with each other, and expand and contract depending on the resource required by the thread, thus allowing for truly compositional proofs for shared-memory concurrency.

Speaker's bio:

-



Scaling TCP performance for multicore systems
KyoungSoo Park | KAIST

2014-12-09, 11:30 - 12:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

Scaling the performance of short TCP connections on multicore systems is fundamentally challenging. Despite many proposals that have attempted to address various shortcomings, the inefficiency of the kernel implementation still persists. For example, even state-of-the-art designs spend 70% to 80% of CPU cycles in handling TCP connections in the kernel, leaving little room for innovation in the user-level program. In this talk, I will present mTCP, a high-performance user-level TCP stack for multicore systems. mTCP addresses the inefficiencies from the ground up - from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques, our design (1) translates multiple expensive system calls into a single shared memory reference, (2) allows efficient flow-level event aggregation, and (3) performs batched packet I/O for high I/O efficiency. Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack and a factor of 3 compared to the MegaPipe system. It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack.

Speaker's bio:

KyoungSoo Park is an associate professor in the Electrical Engineering department at KAIST. His research interests focus on the reliability, performance, and security issues in the design and implementation of networked computing systems. He has developed CoBlitz, a scalable large-file content distribution network (CDN), which was acquired by Verivue, Inc., and subsequently by Akamai, Inc. He has co-developed HashCache, a memory-efficient caching storage system for developing regions, which was chosen as one of the top 10 technologies in 2009 by the MIT Technology Review magazine. Most recently, his mTCP paper received the community award at USENIX NSDI 2014. He received his B.S. degree from Seoul National University in 1997, and his M.A. and Ph.D. degrees from Princeton University in 2004 and 2007, respectively, all in computer science. Before joining KAIST, he worked as an assistant professor in the computer science department at the University of Pittsburgh in 2009.



Authenticated Data Structures, Generically
Michael W. Hicks | University of Maryland

2014-12-05, 13:30 - 15:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 207

Abstract:

An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations.

This paper presents a generic method, using a simple extension to an ML-like functional programming language, which we call lambda-auth, with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual, and it is compiled to code to be run by the prover and the verifier. Using a formalization of lambda-auth, we prove that all well-typed programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented lambda-auth as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures, including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.

Joint work with Andrew Miller, Jonathan Katz, and Elaine Shi, at UMD.
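
The hand-rolled construction that lambda-auth generalizes is the Merkle tree, and it fits in a short sketch. Below, the prover produces a compact proof (the sibling hashes on the path to the root) and the verifier recomputes the root; security rests on the same assumption named in the abstract, collision resistance of the hash. This sketch assumes a power-of-two number of leaves and is illustrative, not the paper's compiler.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build(leaves):
    """Return the tree as a list of levels: level 0 holds the hashed
    leaves, the last level holds the single root hash."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Prover: sibling hashes along the path from leaf idx to the root."""
    proof = []
    for level in levels[:-1]:
        proof.append((idx % 2, level[idx ^ 1]))  # (am I the right child?, sibling)
        idx //= 2
    return proof

def verify(root, leaf, proof):
    """Verifier: recompute the root from the claimed leaf and the proof."""
    acc = h(leaf)
    for is_right, sib in proof:
        acc = h(sib + acc) if is_right else h(acc + sib)
    return acc == root

leaves = [b"a", b"b", b"c", b"d"]
levels = build(leaves)
root = levels[-1][0]
proof = prove(levels, 2)
print(verify(root, b"c", proof))  # True: membership checks out
print(verify(root, b"x", proof))  # False: a forgery needs a hash collision
```

The verifier stores only the root hash, so data maintenance can be outsourced to an untrusted prover, which is exactly the ADS setting described above; lambda-auth's contribution is deriving prover and verifier code like this automatically from an ordinary data-structure definition.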

Speaker's bio:

Michael W. Hicks is a Professor in the Computer Science department and UMIACS at the University of Maryland and is the former Director of the Maryland Cybersecurity Center (MC2). His research focuses on using programming languages and analyses to improve the security, reliability, and availability of software. He is perhaps best known for his work exploring dynamic software updating, which is a technique by which software can be updated without shutting it down. He has explored the design of new programming languages and analysis tools for helping programmers find bugs and software vulnerabilities, and for identifying suspicious or incorrect program executions. He has recently been exploring new approaches to authenticated and privacy-preserving computation, combining techniques from cryptography and automated program analysis.



Exploiting Social Network Structure for Person-to-Person Sentiment Analysis
Robert West | Stanford University

2014-11-26, 11:00 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Person-to-person evaluations are prevalent in all kinds of discourse and important for establishing reputations, building social bonds, and shaping public opinion. Such evaluations can be analyzed separately using signed social networks and textual sentiment analysis, but this misses the rich interactions between language and social context. To capture such interactions, we develop a model that predicts individual A's opinion of individual B by synthesizing information from the signed social network in which A and B are embedded with sentiment analysis of the evaluative texts relating A to B. We prove that this problem is NP-hard but can be relaxed to an efficiently solvable hinge-loss Markov random field, and we show that this implementation outperforms text-only and network-only versions in two very different datasets involving community-level decision-making: the Convote U.S. Congressional speech corpus and the Wikipedia Requests for Adminship corpus. (Joint work with Hristo Paskov, Jure Leskovec, and Christopher Potts)

Time permitting, I will also briefly discuss the "From Cookies to Cooks" project, where we leverage search-engine query logs to gain insights into what foods people consume when and where. (Joint work with Ryen White and Eric Horvitz)

Speaker's bio:

Bob obtained a Diplom degree in computer science from the Technical University of Munich in his native Germany in 2007 and a Master's degree in computer science from McGill University in 2010. He is currently a fifth-year Ph.D. candidate in the InfoLab at Stanford University, advised by Jure Leskovec, where he has been working at the intersection of data mining, machine learning, and natural language processing to convert raw log data into meaningful insights on a number of human behaviors, ranging from navigation in complex networks to Wikipedia editing to food intake.



Randomized Solutions to Renaming under Crashes and Byzantine Faults
Oksana Denysyuk | INESC-ID and University of Lisbon

2014-11-17, 14:00 - 15:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Exploring the power and limitations of different coordination problems has always been at the heart of the theory of distributed computing. This talk addresses a coordination problem called renaming. Renaming can be seen as a dual to the classical consensus problem: instead of agreeing on a unique value, in renaming correct processes must disagree by picking distinct values from an appropriate range of values. The talk consists of two parts, each considering a different fault model in synchronous message-passing systems. In the first part, we tackle crash faults and propose a new randomization technique, called balls-into-leaves, which solves renaming in a sub-logarithmic number of rounds. This technique outperforms optimal deterministic algorithms by an exponential factor. In the second part, we consider the more challenging Byzantine faults. We propose a randomized renaming algorithm that tolerates up to t Byzantine faults.
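
The renaming *problem* itself is easy to simulate. The sketch below uses the naive strategy, random picks with retries on collision, purely to illustrate what a renaming algorithm must achieve; it is not the balls-into-leaves technique from the talk, which achieves its sub-logarithmic round bound with a far more careful structure.

```python
import random

def randomized_renaming(n, namespace, seed=1):
    """Naive round-based renaming: in each synchronous round, every
    contender picks a random name; uncontested fresh names stick,
    collisions retry. Returns (process -> name map, rounds used)."""
    rng = random.Random(seed)
    names = {}
    contenders = list(range(n))
    rounds = 0
    while contenders:
        rounds += 1
        picks = {}
        for p in contenders:
            picks.setdefault(rng.randrange(namespace), []).append(p)
        contenders = []
        for name, procs in picks.items():
            if len(procs) == 1 and name not in names.values():
                names[procs[0]] = name      # uncontested fresh name: keep it
            else:
                contenders.extend(procs)    # collision: retry next round
    return names, rounds

names, rounds = randomized_renaming(8, 16)
print(sorted(names.values()), rounds)  # 8 distinct names after a few rounds
```
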

Speaker's bio:

Oksana Denysyuk received an M.S. and B.S. in Computer Science from the Instituto Superior Tecnico of the University of Lisbon in 2009 and 2007, respectively. She is currently a Ph.D. student in the same organization and expects to obtain her doctorate degree in November 2014. Her work was published at the PODC, DISC, and ICDCS conferences. Her research interests include theory of distributed computing, dynamic networks, fault tolerance, and randomized distributed algorithms.



Yesquel: scalable SQL storage for Web applications
Marcos K. Aguilera | unaffiliated

2014-11-17, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Web applications (web mail, web stores, social networks, etc) keep massive amounts of data that must be readily available to users.  The storage system underlying these applications has evolved dramatically over the past 25 years, from file systems, to SQL database systems, to a large variety of NOSQL systems. In this talk, we contemplate this fascinating history and present a new storage system called Yesquel. Yesquel combines several advantages of prior systems. It supports the SQL query language to facilitate the design of Web applications, while offering performance and scalability competitive with a widely used NOSQL system.

Speaker's bio:

Dr. Aguilera received a Ph.D. in Computer Science from Cornell University in 2000. His work spans both the theoretical foundations of distributed computing and the practical applications of distributed systems. He has worked as a researcher at Compaq's Systems Research Center, HP Labs, and Microsoft Research Silicon Valley. He chaired OPODIS 2014, DISC 2012, ICDCN 2011, and LADIS 2010. He currently serves on the editorial board of ACM Transactions on Computing Systems and on the program committees of SOSP 2015 and HotOS 2015.



Generalized Universality
Rachid Guerraoui | EPFL

2014-11-12, 10:30 - 12:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 002

Abstract:

The replicated state machine is a fundamental computing construct, for it essentially makes a distributed system emulate a highly available, centralized one, using a consensus abstraction through which processes agree on common decisions. The idea is at the heart of the fault tolerance of most data centers today. Any sequential object can be modeled by a state machine that is replicated over all processes of the system and accessed in a wait-free manner: we speak of the universality of the construct and of its underlying consensus abstraction. Yet consensus is just a special case of a more general abstraction, k-set consensus, where processes agree on at most k different decisions. It is natural to ask whether there exists a generalization of state machine replication with k-set agreement, for otherwise distributed computing would not deserve the aura of having an underpinning theory: the case k=1 (k-set consensus with k=1) would be special. The talk will recall the classical state machine replication construct and show how, using k-set consensus as an underlying abstraction, the construct can be generalized to implement k state machines of which at least one makes progress, generalizing in a precise sense the very notion of consensus universality. This is joint work with Eli Gafni.

Speaker's bio:

Rachid Guerraoui is professor of Computer Science at the Swiss Federal Institute of Technology in Lausanne, where he leads the Distributed Programming Laboratory. Rachid is a fellow of the ACM and has recently been awarded an ERC Advanced Grant and a Google Focused Award. He has also been affiliated in the past with the research center of Ecole des Mines de Paris, the Commissariat a l'Energie Atomique in Saclay, Hewlett-Packard Laboratories in Palo Alto, and the Massachusetts Institute of Technology.



Scalable personalization infrastructures
Anne-Marie Kermarrec | Inria

2014-11-11, 11:30 - 13:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

The ever-growing amount of data available on the Internet clearly calls for personalization. Yet the most effective personalization schemes, such as those based on collaborative filtering (CF), are notoriously resource greedy. We argue that scalable infrastructures relying on P2P design can scale to the increasing number of users, data, and dynamics. We will present a novel scalable k-nearest-neighbor protocol, whose P2P flavor provides scalability by design. This protocol provides each user with an implicit social network composed of users with similar tastes in a given application. It has been instantiated in various infrastructures for several applications (recommendation, top-k, search, etc.): (1) WhatsUp, a P2P collaborative filtering system for disseminating news items in a large-scale dynamic setting with no central authority; (2) HyRec, a hybrid recommendation infrastructure providing an online, cost-effective, scalable system for CF personalization that offloads CPU-intensive recommendation tasks to front-end client browsers while retaining storage and orchestration tasks within back-end servers; (3) a cloud-based centralized engine providing real-time recommendations.
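A centralized toy version of the k-nearest-neighbor computation underlying such protocols might look as follows (a sketch with invented data structures; the actual protocols approximate this lazily through P2P sampling and gossip rather than comparing all pairs):

```python
def jaccard(a, b):
    """Similarity of two item sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def knn_network(profiles, k):
    """For each user, return the k most similar other users by Jaccard
    similarity of their liked-item sets: the 'implicit social network'
    the abstract describes, computed centrally here for clarity."""
    net = {}
    for u, items in profiles.items():
        scored = sorted(((jaccard(items, v_items), v)
                         for v, v_items in profiles.items() if v != u),
                        reverse=True)
        net[u] = [v for _, v in scored[:k]]
    return net
```

The centralized version is quadratic in the number of users; the point of the P2P design is to converge to (approximately) the same neighborhoods with each peer only ever examining a small sample of others.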

Speaker's bio:

Anne-Marie Kermarrec is a research director at Inria, where she leads the ASAP (As Scalable As Possible) research group on large-scale dynamic distributed systems. Before that she was with the Vrije Universiteit and Microsoft Research Cambridge. She was the PI of an ERC Starting Grant (2008-2013) and an ERC Proof of Concept Grant (2013).



Network Complexity & Complex Networks
Roger Wattenhofer | ETH Zurich

2014-11-10, 11:15 - 11:15
Saarbrücken building E1 5, room 002

Abstract:

What can be computed, and how efficiently, is a core question in computer science. Not surprisingly, in distributed systems and networking research, a fundamental question is what can be computed how efficiently in a distributed fashion, in a network. More precisely, if nodes of a network must solve a problem by locally communicating with their neighbors, how fast can they compute (or approximate) a global (optimization) problem? Throughout the years, we studied different aspects and problems of this "network complexity" theory. In my talk I will discuss a few facets, e.g., how to compute the network diameter, or the role of randomness. Towards the end of the talk I will present some nuggets regarding "complex networks", in particular updates in Software Defined Networks, and attacks on the Bitcoin network.
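One of the problems mentioned, computing the network diameter, is trivial with global knowledge; the interesting question in network complexity is how closely local, distributed computation can match this. A minimal centralized baseline, for reference:

```python
from collections import deque

def eccentricity(adj, src):
    """Maximum BFS distance from src (adj is an adjacency dict;
    the graph is assumed connected and unweighted)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(adj):
    """Exact diameter: one BFS per node, O(n*m) with full global
    knowledge. Distributed algorithms must approach this while each
    node communicates only with its neighbors."""
    return max(eccentricity(adj, s) for s in adj)
```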

Speaker's bio:

Roger Wattenhofer is a full professor at the Information Technology and Electrical Engineering Department, ETH Zurich, Switzerland. He received his doctorate in Computer Science in 1998 from ETH Zurich. From 1999 to 2001 he was in the USA, first at Brown University in Providence, RI, then at Microsoft Research in Redmond, WA. He then returned to ETH Zurich, originally as an assistant professor at the Computer Science Department.

Roger Wattenhofer's research interests are a variety of algorithmic and systems aspects in computer science and information technology, currently in particular wireless networks, wide area networks, mobile systems, social networks, and physical algorithms. He publishes in different communities: distributed computing (e.g., PODC, SPAA, DISC), networking (e.g., SIGCOMM, MobiCom, SenSys), or theory (e.g., STOC, FOCS, SODA, ICALP).



Using Twitter to study Food Consumption and Fitness Behavior
Ingmar Weber | MPI-INF - D1

2014-11-05, 10:30 - 11:30
Saarbrücken building E1 5, room 029

Abstract:

This talk presents two ongoing lines of work looking at how Twitter can be used to track societal level health issues.

You Tweet What You Eat: Studying Food Consumption Through Twitter (joint work with Yelena Mejova and Sofiane Abbar). Food is an integral part of our lives, cultures, and well-being, and is of major interest to public health. The collection of daily nutritional data involves keeping detailed diaries or periodic surveys and is limited in scope and reach. Alternatively, social media is infamous for allowing its users to update the world on the minutiae of their daily lives, including their eating habits. In this work we examine the potential of Twitter to provide insight into US-wide dietary choices by linking the tweeted dining experiences of 210K users to their interests, demographics, and social networks. We validate our approach by relating the caloric values of the foods mentioned in the tweets to the state-wide obesity rates, achieving a Pearson correlation of 0.77 across the 50 US states and the District of Columbia. We then build a model to predict county-wide obesity and diabetes statistics based on a combination of demographic variables and food names mentioned on Twitter. Our results show significant improvement over previous research. We further link this data to societal and economic factors, such as education and income, illustrating that, for example, areas with higher education levels tweet about food that is significantly less caloric. Finally, we address the issue of the social nature of obesity (first raised by Christakis & Fowler) by inducing two social networks using mentions and reciprocal following relationships.

From Fitness Junkies to One-time Users: Determining Successful Adoptions of Fitness Applications on Twitter (joint work with Kunwoo Park and Meeyoung Cha). As our world becomes more digitized and interconnected, one's health status---a topic that was once thought to be private---is shared on public platforms. This trend is facilitated by scores of fitness applications that push health updates to users' social networks. This paper presents the behavioral patterns of social opt-in users of a popular fitness application, MyFitnessPal. Through data gathered from Twitter, we determine whether any features such as the profile, fitness activities, and social support can predict long-term retention and weight loss of users. We discuss implications of findings related to HCI and the design of health applications.
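The validation statistic reported for the first study is a plain Pearson correlation coefficient. For concreteness, a minimal implementation (the real inputs would be per-state vectors of tweeted-food calories and obesity rates, which are not reproduced here):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length numeric
    sequences: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Applied to the 51 (state, D.C.) pairs of average tweeted calories and obesity rate, this is the r = 0.77 figure in the abstract.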

Speaker's bio:

At QCRI, Ingmar works on social media mining, web science and computational political science. Common to most of his research is the use of online data to understand offline behaviour. Before joining QCRI, Ingmar was a research scientist at Yahoo! Research in Barcelona working in the area of web mining, analyzing large sets of user log data. Previously, he was a postdoc at EPFL working on sponsored search auctions, tag recommendation and other things. In the summer of 2008 he visited Microsoft Research Cambridge and, before starting his PhD, interned with companies in the area of public key cryptography and the Fraunhofer Institute for Industrial Mathematics. He did his PhD at the Max-Planck Institute for Computer Science and holds BA and MA degrees in mathematics from Cambridge University.



Machine Learning about People from their Language
Noah Smith | Carnegie Mellon University

2014-11-04, 15:30 - 16:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

This talk describes new analysis algorithms for text data aimed at understanding the social world from which the data emerged. The political world offers some excellent questions to explore: Do US presidential candidates "move to the political center" after winning a primary election? Are Supreme Court justices swayed by amicus curiae briefs, documents crafted at great expense? I'll show how our computational models capture theoretical commitments and uncertainty, offering new tools for exploring these kinds of questions and more. Time permitting, we'll close with an analysis of a quarter million biographies, discussing what can be discovered about human lives as well as those who write about them.

The primary collaborators on this research are my Ph.D. students David Bamman and Yanchuan Sim; collaborators from the Political Science Department at UNC Chapel Hill, Brice Acree, and Justin Gross; and Bryan Routledge from the Tepper School of Business at CMU.

Speaker's bio:

Noah Smith designs algorithms for automated analysis of human language. He often exploits the web to this end, including mining the web for translations (Resnik and Smith, 2003), measuring public opinion from social messages (O'Connor et al., 2010), and inferring geographic linguistic variation (Eisenstein et al., 2010).

Smith has also contributed algorithms tackling the core problems of natural language processing: parsing sentences into syntactic representations (Eisner et al., 2005; Martins et al., 2009) and semantic representations (Das et al., 2010; Flanigan et al., 2014), as well as cross-cutting techniques for unsupervised language learning (Smith and Eisner, 2005; Cohen and Smith, 2009). His 2011 book, Linguistic Structure Prediction, synthesizes many statistical modeling techniques for language.

Such methods advance applications for automatic translation (Al-Onaizan et al., 1999; Gimpel and Smith, 2011), empirical work in the social sciences (Kogan et al., 2009; Yano et al., 2009, Sim et al., 2013) and humanities (Bamman et al., 2014), and education (Heilman and Smith, 2010), and other next-generation language technologies.

Smith is Associate Professor of Language Technologies and Machine Learning in the School of Computer Science at Carnegie Mellon University. In fall 2015, he will join the University of Washington as Associate Professor of Computer Science & Engineering. Prior to coming to CMU, he was a Hertz Foundation Fellow at Johns Hopkins University, where he completed his Ph.D. in 2006. He is a clarinetist, tanguero, and swimmer.



MultiSE: Multi-Path Symbolic Execution using Value Summaries
Koushik Sen | UC Berkeley

2014-10-30, 16:00 - 17:00
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 002

Abstract:

Dynamic symbolic execution (DSE) has been proposed to effectively generate test inputs for real-world programs. Unfortunately, DSE techniques do not scale well for large realistic programs, because often the number of feasible execution paths of a program increases exponentially with the increase in the length of an execution path.

In this paper, we propose MultiSE, a new technique for merging states incrementally during symbolic execution, without using auxiliary variables. The key idea of MultiSE is based on an alternative representation of the state, where we map each variable, including the program counter, to a set of guarded symbolic expressions called a value summary. MultiSE has several advantages over conventional DSE and existing state-merging techniques: value summaries enable sharing of symbolic expressions and path constraints along multiple paths and thus avoid redundant execution. MultiSE does not introduce auxiliary symbolic variables, which enables it to 1) make progress even when merging values not supported by the constraint solver, 2) avoid expensive constraint solver calls when resolving function calls and jumps, and 3) carry out most operations concretely. Moreover, MultiSE updates value summaries incrementally at every assignment instruction, which makes it unnecessary to identify join points and to keep track of the variables to merge at those points.
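The value-summary update can be sketched in a few lines. In this toy model (ours, not the paper's implementation), guards are represented extensionally as sets of truth assignments ("worlds") over a fixed set of branch conditions, so the update is exact; real MultiSE keeps guards as symbolic constraints instead:

```python
from itertools import combinations

CONDS = ("b1", "b2")  # the branch conditions in this toy example

def worlds():
    """All truth assignments over CONDS, each as a frozenset of the
    conditions that hold in it."""
    return [frozenset(c) for r in range(len(CONDS) + 1)
            for c in combinations(CONDS, r)]

def assign(summary, guard, value):
    """One incremental MultiSE-style update. A value summary maps each
    possible value of a variable to the set of worlds in which it holds;
    a guard is a set of worlds. Assigning `value` under `guard` moves
    the guarded worlds to the new value and keeps the rest, so a single
    summary covers all paths at once."""
    new = {}
    for v, ws in summary.items():
        rest = ws - guard
        if rest:
            new[v] = rest
    if guard:
        new[value] = new.get(value, frozenset()) | guard
    return new
```

For example, `x = 0` followed by `if (b1) x = 1` yields one summary holding 0 under the worlds where b1 is false and 1 under those where it is true, with no per-path duplication.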

We have implemented MultiSE for JavaScript programs in a publicly available open-source tool. Our evaluation of MultiSE on several programs shows that it runs significantly faster than traditional dynamic symbolic execution and saves a substantial number of state merges compared to existing state-merging techniques.

Speaker's bio:

-



Vellvm: Verifying Safety in the LLVM IR
Steve Zdancewic | University of Pennsylvania

2014-10-09, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

The Low-Level Virtual Machine (LLVM) compiler provides a modern, industrial-strength SSA-based intermediate representation (IR) along with infrastructure support for many source languages and target platforms. Much of the LLVM compiler is structured as IR to IR translation passes that apply various optimizations and analyses, making it an ideal target for enforcing security properties of low-level code.

In this talk, I will describe the Vellvm project, which seeks to provide a formal framework for developing machine-checkable proofs about LLVM IR programs and translation passes. I'll discuss some of the subtleties of modeling the LLVM IR semantics, including nondeterminism and its use of SSA representation. I'll also describe some of the proof techniques that we have used for reasoning about LLVM IR transformations and describe our results about the formal verification of the SoftBound pass, which hardens C programs against memory safety errors.

Vellvm, which is implemented in the Coq theorem prover, provides facilities for extracting LLVM IR transformation passes and plugging them into the LLVM compiler, thus enabling us to create verified optimization passes for LLVM and evaluate them against their unverified counterparts. Our experimental results show that fully verified and automatically extracted implementations can yield competitive performance.

This is joint work with Jianzhou Zhao and Milo Martin (both at Penn) and Santosh Nagarakatte (at Rutgers University).

Speaker's bio:

-



Termination of Linear Programs: Advances and Challenges
Dr. Joel Ouaknine | University of Oxford

2014-10-02, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In the quest for program analysis and verification, program termination -- determining whether a given program will always halt or could execute forever -- has emerged as a central component. Although proven undecidable in the general case by Alan Turing nearly 80 years ago, positive results have been obtained in a variety of restricted instances. We survey the situation with a focus on simple linear programs, i.e., WHILE loops in which all assignments and guards are linear, discussing recent progress as well as ongoing and emerging challenges.
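To make the setting concrete, here is a simulator for one such loop over exact rationals (an invented sketch; simulation can at best witness termination from a single initial state, whereas the open problems the talk surveys concern deciding it for all states):

```python
from fractions import Fraction

def simulate_linear_loop(A, b, c, x, max_steps=10_000):
    """Run the simple linear loop `while c.x > 0: x := A.x + b` with
    exact rational arithmetic. Returns the number of iterations before
    the guard fails, or None if the step cap is reached (which proves
    nothing: the loop might halt later or run forever)."""
    x = [Fraction(v) for v in x]
    for step in range(max_steps):
        if sum(ci * xi for ci, xi in zip(c, x)) <= 0:
            return step
        x = [sum(aij * xj for aij, xj in zip(row, x)) + Fraction(bi)
             for row, bi in zip(A, b)]
    return None
```

For instance, `x := x/2 - 1` starting from x = 8 halts after three iterations, while `x := x + 1` from x = 1 never does; deciding such questions symbolically for all inputs is exactly where the difficulty lies.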

Speaker's bio:

-



Leveraging Sharding in the Design of Scalable Replication Protocols
Robbert van Renesse | Cornell University

2014-08-08, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Most if not all datacenter services use sharding and replication for scalability and reliability. Shards are more-or-less independent of one another and individually replicated. We challenge this design philosophy and present a replication protocol where the shards interact with one another: A protocol running within shards ensures linearizable consistency, while the shards interact in order to improve availability. We analyze its availability properties and evaluate a working implementation.

Speaker's bio:

Robbert van Renesse obtained his Ph.D. in 1989 at the Vrije Universiteit in Amsterdam, and subsequently worked on the Plan 9 operating system at AT&T Bell Labs. Since 1991 he and his students have been working at Cornell University on the theory and practice of scalable distributed systems. Van Renesse co-founded two companies in distributed systems technology.



A Fast, Correct Time-Stamped Stack
Mike Dodds | University of York, UK

2014-08-06, 15:30 - 16:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Concurrent data-structures, such as stacks, queues, and deques, often implicitly enforce a total order over elements in their underlying memory layout. However, much of this order is unnecessary: linearizability only requires that elements are ordered if the insert methods ran in sequence. We propose a new approach which uses timestamping to avoid unnecessary ordering. Pairs of elements can be left unordered (represented by unordered timestamps) if their associated insert operations ran concurrently, and order imposed as necessary by the eventual remove operations.
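A sequential sketch of the timestamping idea (the class and its fields are invented for illustration, and all of the concurrency that makes the real TS stack interesting is elided):

```python
import itertools

class TSStackSketch:
    """Sequential sketch of a timestamped stack: each pushed element
    carries a timestamp, and pop removes an element whose timestamp is
    maximal. In the real TS stack, elements pushed concurrently may
    carry unordered timestamps, so pop is free to remove any of them;
    here that is modelled by pushing values with an explicit equal ts."""

    def __init__(self):
        self._items = []              # list of (timestamp, value) pairs
        self._clock = itertools.count()

    def push(self, value, ts=None):
        # ts=None gives an ordered push; an explicit ts lets us
        # simulate pushes whose order was never determined.
        self._items.append((next(self._clock) if ts is None else ts, value))

    def pop(self):
        ts, value = max(self._items, key=lambda p: p[0])
        self._items.remove((ts, value))
        return value
```

With distinct timestamps this behaves exactly like a stack; with shared timestamps, either element may be returned first, which is precisely the freedom the linearizability argument has to justify.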

We realise our approach in a new non-blocking data-structure, the TS (timestamped) stack. In experiments on x86, the TS stack outperforms and outscales all its competitors -- for example, it outperforms the elimination-backoff stack by a factor of two. In our approach, more concurrency translates into less ordering, giving less-contended removal and thus higher performance and scalability. Despite this, the TS stack is linearizable with respect to stack semantics.

The weak internal ordering in the TS stack presents a challenge when establishing linearizability: standard techniques such as linearization points work well when there exists a total internal order. We have developed a new stack theorem, mechanised in Isabelle, which characterises the orderings sufficient to establish stack semantics. By applying our stack theorem, we can show that the TS stack is indeed correct.

Speaker's bio:

-



Trace Complexity
Flavio Chierichetti | Sapienza University of Rome

2014-07-30, 10:30 - 11:30
Saarbrücken building E1 5, room 029

Abstract:

Prediction tasks in machine learning usually require deducing a latent variable, or structure, from observed traces of activity. Sometimes these tasks can be carried out with significant precision, while at other times extracting any significance from the prediction requires an unrealistically large number of traces.

In this talk, we will study the trace complexity of (that is: the number of traces needed for) a number of prediction tasks in social networks: the network inference problem, the number of signers problem, and the star rating problem.

The first problem was defined by [Gomez-Rodriguez et al, 2010] and consists of reconstructing the edge set of a network given traces representing the chronology of infection times as epidemics spread through the network. The second problem’s goal is to predict the unknown number of signers of email-based petitions, given only a small subset of the emails that circulated. The last problem aims to predict the unknown absolute "quality" of a movie using the ratings given by different users (each with their own unknown precision).
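As a toy illustration of what a trace provides for the network inference problem, consider this naive consecutive-infection heuristic (an invented sketch, not the algorithm of Gomez-Rodriguez et al., which maximises a likelihood over cascades):

```python
from collections import Counter

def infer_edges(traces, min_support=2):
    """Guess an edge (u, v) whenever v was infected immediately after u
    in at least `min_support` traces. A trace is a list of nodes in
    order of infection time. Crude, but it shows the raw material that
    trace-complexity bounds reason about: more traces, better evidence."""
    votes = Counter()
    for trace in traces:
        for u, v in zip(trace, trace[1:]):
            votes[(u, v)] += 1
    return {edge for edge, n in votes.items() if n >= min_support}
```

The trace-complexity question is then: how many such traces are needed before any inference rule, naive or optimal, can recover the true edge set with significant confidence?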

These examples will allow us to highlight some interesting general points of prediction tasks.

Joint work with subsets of Bruno Abrahao, Anirban Dasgupta, Bobby Kleinberg, Jon Kleinberg, Ravi Kumar, Silvio Lattanzi and David Liben-Nowell.

Speaker's bio:

Flavio Chierichetti is an assistant professor in the Department of Computer Science at Sapienza University of Rome.



Tracking information flow in web applications
Andrei Sabelfeld | Chalmers University

2014-07-24, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

This talk discusses a principled approach to web application security through tracking information flow in web applications. Although the agile nature of developments in web application technology makes web application security much of a moving target, we show that there are some fundamental challenges and tradeoffs that determine possibilities and limitations of automatically securing web applications. We address challenges related to mutual distrust on the policy side (as in web mashups) and tracking information flow in dynamic web programming languages (such as JavaScript) to provide a foundation for practical web application security.
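The explicit-flow half of such tracking can be sketched as a tainted-value wrapper (a minimal invented example; real monitors for dynamic languages such as JavaScript must also handle implicit flows through control dependence, which this sketch ignores):

```python
class Tainted:
    """A value carrying a 'secret' label that propagates through
    operations: the result of combining anything secret is secret."""

    def __init__(self, value, secret=False):
        self.value, self.secret = value, secret

    def __add__(self, other):
        o_val = other.value if isinstance(other, Tainted) else other
        o_sec = other.secret if isinstance(other, Tainted) else False
        return Tainted(self.value + o_val, self.secret or o_sec)

def send_to_network(x):
    """A public sink: releasing secret-labelled data is an explicit
    information-flow violation and is blocked."""
    if isinstance(x, Tainted) and x.secret:
        raise PermissionError("explicit flow of secret data blocked")
    return x.value if isinstance(x, Tainted) else x
```

For example, `send_to_network(Tainted("hello"))` succeeds, while sending a password concatenated with any suffix is rejected, since the label survives the concatenation.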

Speaker's bio:

Andrei Sabelfeld is a Professor in the Department of Computer Science and Engineering at Chalmers University of Technology in Gothenburg, Sweden. After receiving his Ph.D. in Computer Science from Chalmers in 2001 and before joining Chalmers as faculty in 2004, he was a Research Associate at Cornell University in Ithaca, NY. His research has developed the link between two areas of Computer Science: Programming Languages and Computer Security. Sabelfeld's article on Language-Based Information-Flow Security is one of the most cited Computer Science articles published in 2003.



Compositional Verification of Termination-Preserving Refinement of Concurrent Programs
Hongjin Liang | UST China

2014-07-23, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Many verification problems can be reduced to refinement verification. However, existing work on verifying refinement of concurrent programs either fails to prove the preservation of termination, allowing a diverging program to trivially refine any programs, or is difficult to apply in compositional thread-local reasoning. In this talk, I will present a Hoare-style concurrent program logic supporting termination-preserving refinement proofs. We show two key applications of our logic, i.e., verifying linearizability and lock-freedom together for fine-grained concurrent objects, and verifying full correctness of optimizations of concurrent algorithms.

Speaker's bio:

Hongjin and her advisor have been working on program logics for proving refinement of concurrent programs and have published at top venues: POPL, PLDI, LICS, and CONCUR.



Adventures in Program Synthesis
Ras Bodik | UC Berkeley

2014-05-22, 10:30 - 12:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

If you possessed the power to synthesize any program from just a high-level description of the desired behavior, how would you deploy this power? It seems that you could end programming as we know it and automate the creation of everything that can be viewed as a program, including biological and economic models. This dream is not fully achievable but over the last decade we have learnt how to rephrase the synthesis problem to make it solvable and practically applicable. This talk will mine seven projects from my group for surprising lessons and future opportunities. I will first present two fresh answers to the question of "where do specifications come from?" and then argue for synergy between domain-specific languages and synthesis. I will also explain why synthesis may help rethink compiler construction, necessitated by the era of unusual hardware. Looking into the next decade, I will illustrate how synthesis may facilitate computational doing --- the development of tools for data science and the digital life.
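The simplest possible instance of synthesis from a behavior description is enumerative search against input/output examples (a brute-force sketch with invented helpers; the techniques the talk covers, such as sketching and solver-aided languages, prune this search with symbolic constraints rather than enumerating everything):

```python
import itertools

def evaluate(e, x):
    """Evaluate an expression tree: 'x', an int constant, or (op, l, r)."""
    if e == "x":
        return x
    if isinstance(e, int):
        return e
    op, l, r = e
    lv, rv = evaluate(l, x), evaluate(r, x)
    return lv + rv if op == "+" else lv * rv

def synthesize(examples, max_depth=2):
    """Enumerate arithmetic expressions over one input x, built from
    {x, 1, 2, +, *}, and return the first one consistent with every
    (input, output) example, or None if the depth bound is exhausted."""
    pool = ["x", 1, 2]
    for _ in range(max_depth):
        pool = pool + [(op, l, r) for op in "+*"
                       for l, r in itertools.product(pool, repeat=2)]
    for e in pool:
        if all(evaluate(e, x) == y for x, y in examples):
            return e
    return None
```

Given the examples (1, 3) and (2, 5), the search recovers an expression equivalent to 2*x + 1; the combinatorial explosion with depth is exactly why rephrasing the synthesis problem, as the talk describes, matters.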

Speaker's bio:

Ras Bodik is a Professor of Computer Science at UC Berkeley. He works on a range of techniques for program synthesis, from programming by demonstration, to sketching, and solver-aided languages. His group has applied synthesis to high-performance computing, web browser construction, algorithm design, document layout, and biology. He has designed a course on programming languages where students learn hands-on small-language design by constructing a modern web browser.



Modeling and representing materials in the wild
Kavita Bala | Cornell University

2014-05-08, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

Our everyday life brings us in contact with a rich range of materials that contribute to both the utility and aesthetics of our environment. Human beings are very good at using subtle distinctions in appearance to distinguish between materials (e.g., silk vs. cotton, laminate vs. granite). In my group we are working on understanding how we perceive materials to drive better graphics and vision algorithms.

In this talk I will present OpenSurfaces, a rich, labeled database consisting of thousands of examples of surfaces segmented from consumer photographs of interiors, and annotated with material parameters, texture information, and contextual information. We demonstrate the use of this database in applications like surface retexturing, intrinsic image decomposition, intelligent material-based image browsing, and material design. I will also briefly describe our work on light scattering for translucent materials and realistic micron-resolution models for fabrics. Our work has applications in many domains: in virtual and augmented reality fueled by the advent of devices like Google Glass, in virtual prototyping for industrial design, in ecommerce and retail, in textile design and prototyping, in interior design and remodeling, and in games and movies.

Speaker's bio:

Kavita Bala is an Associate Professor in the Computer Science Department at Cornell University. She received her PhD from the Massachusetts Institute of Technology (MIT). Bala leads research projects in physically-based scalable rendering, perceptually-based graphics, material perception and acquisition, and image-based modeling and texturing. Her group's scalable rendering research on Lightcuts is the core rendering technology in Autodesk's cloud rendering platform. Bala's professional activities include Chair of SIGGRAPH Asia 2011, co-chair of Pacific Graphics (2010) and the Eurographics Symposium on Rendering (2005), the Papers Advisory Board for SIGGRAPH and SIGGRAPH Asia, Senior Associate Editor for TOG, and Associate Editor for TVCG and CGF. She has received the NSF CAREER award and Cornell's College of Engineering James and Mary Tien Excellence in Teaching Award (2006 and 2009).



Lazy Bit-vector Solving and Witnessing Compiler Transformations
Liana Hadarean | NYU

2014-05-05, 14:00 - 15:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The Satisfiability Modulo Theories (SMT) problem is a generalization of the Boolean satisfiability problem to first-order logic. SMT solvers are now at the forefront of automated reasoning and they are increasingly used in a number of diverse applications. In this talk I will first present a new decision procedure for the theory of fixed-width bit-vectors, and then describe an application of SMT techniques to verifying compiler transformations.

Most SMT solvers decide bit-vector constraints via eager reduction to propositional logic after first applying powerful word-level rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. We present a lazy solver that targets such problems by maintaining the word-level structure during search. This approach also enables efficient combination with the theory of arrays using variants of the Nelson-Oppen combination procedure.
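The word-level rewriting that precedes bit-blasting can be illustrated with a few classic rules (an invented toy rewriter over binary bit-vector operators; production solvers apply hundreds of such rules, and the lazy approach of the talk additionally searches at the word level instead of always reducing to SAT):

```python
def rewrite(e):
    """Bottom-up word-level simplification. An expression is a leaf
    (a variable name or the constant 0) or a tuple (op, lhs, rhs)
    with op in {'&', '|', '^'}."""
    if not isinstance(e, tuple):
        return e
    op, a, b = e[0], rewrite(e[1]), rewrite(e[2])
    if op == "&":
        if a == 0 or b == 0:
            return 0          # x & 0 -> 0
        if a == b:
            return a          # x & x -> x
    if op == "|":
        if a == 0:
            return b          # 0 | x -> x
        if b == 0:
            return a
        if a == b:
            return a          # x | x -> x
    if op == "^" and a == b:
        return 0              # x ^ x -> 0
    return (op, a, b)
```

For example, `x & (y ^ y)` collapses to 0 without any bit-level reasoning; the problems that motivate the lazy solver are exactly those where no such top-level collapse is available.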

The combination of the array and bit-vector theories offers a natural way of encoding the semantics of compiler intermediate representation languages. We show how to integrate an SMT solver with a compiler to build a "self-certifying" compiler: a compiler that generates a verifiable justification for its own correctness on every run. Our compiler produces as justification a refinement relation between the source and target programs of every optimization step. This "witness" relation is produced by an auxiliary witness generator, and is untrusted: its correctness is checked by an external SMT solver. Our implementation is based on the LLVM compiler: we have written generators for a number of intra-procedural optimizations. Preliminary results suggest that the overhead of witness generation and checking during compilation is manageable.

Speaker's bio:

-



Modular Reasoning about Heap Paths via Effectively Propositional Formulas
Ori Lahav | Tel Aviv University

2014-04-24, 10:30 - 11:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

First-order logic with transitive closure and separation logic enable elegant interactive verification of heap-manipulating programs. However, undecidability results and the high asymptotic complexity of checking validity preclude complete automatic verification of such programs, even when loop invariants and procedure contracts are specified as formulas in these logics.

We tackle the problem of procedure-modular verification of reachability properties of heap-manipulating programs using efficient decision procedures that are complete: that is, a SAT solver must generate a counterexample whenever a program does not satisfy its specification. By (a) requiring that each procedure modify a fixed set of heap partitions and create a bounded amount of heap sharing, and (b) restricting program contracts and loop invariants to use only deterministic paths in the heap, we show that heap reachability updates can be described in a simple manner. These restrictions force program specifications and verification conditions to lie within a fragment of first-order logic with transitive closure that is reducible to effectively propositional logic, and hence facilitate sound, complete, and efficient verification.

We implemented a tool atop Z3 and report on preliminary experiments that establish the correctness of several programs that manipulate linked data structures.

Presented in POPL'14.

Joint work with: Shachar Itzhaky (Tel Aviv University), Anindya Banerjee (IMDEA Software Institute), Neil Immerman (University of Massachusetts), Aleksandar Nanevski (IMDEA Software Institute), Mooly Sagiv (Tel Aviv University)

Speaker's bio:

-



Mobile multi-cores: power and performance
Aaron Carroll | NICTA, Sydney

2014-04-23, 11:00 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

The modern smartphone is a power hungry device, and due largely to the proliferation of multi-core CPUs, the applications processor is one of the main energy drivers. Indeed, a current-generation mobile multi-core can drain your battery in well under one hour. In this talk I will discuss how to manage mobile multi-cores more effectively. I will start with some ground-truth about the energy consumption inside a smartphone, based on circuit-level measurements of real devices. Noting that the applications processor continues to be relevant, I will discuss some techniques for regulating multi-core CPU power, showing the importance of static power and its surprising variance among devices. Finally, I will conclude the talk discussing current and future work on exploring alternate ways to utilize the parallelism available in mobile multi-cores.

Speaker's bio:

Aaron Carroll is a final-year PhD student in the Software Systems Research Group at NICTA and UNSW in Sydney, Australia, under the supervision of Prof. Gernot Heiser. His research interests include mobile and embedded systems, power management and measurement, operating systems, and multi-core performance, all with a strong emphasis on practicality. He is currently interning in the Efficient Computing group at Rice University, where he is exploring ways to improve performance on multi-core processors beyond thread-level parallelism.



Practical Real-Time with Look-Ahead Scheduling
Dr. Michael Roitzsch | TU Dresden

2014-04-23, 10:00 - 11:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

From video and music to responsive UIs, a lot of real-time workloads run on today's desktops and mobile devices, but real-time scheduling interfaces in commodity operating systems have not gained traction. As a result, the CPU scheduler receives no explicit knowledge about applications' needs and thus falls back to heuristics or best-effort operation. I present ATLAS - the Auto-Training Look-Ahead Scheduler. ATLAS improves service to applications with regard to two non-functional properties: timeliness and overload detection. ATLAS provides timely service to applications, accessible through an easy-to-use interface: deadlines specify timing requirements, while workload metrics describe jobs. ATLAS employs machine learning to predict job execution times. Deadline misses are detected before they occur, so applications can react early. ATLAS is currently a single-core scheduler, so after presenting the status quo I will discuss the planned multicore extension and the new application scenarios it enables.
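The prediction step can be sketched in a few lines. ATLAS itself is an in-kernel scheduler; the toy below (data, metric, and function names are made up for illustration) only conveys the idea of fitting execution time as a function of a per-job workload metric and flagging a deadline miss before the job runs.

```python
# Sketch of the execution-time prediction idea: fit execution time as a
# linear function of a per-job workload metric (ordinary least squares),
# then flag jobs whose predicted runtime would exceed the available slack.

def fit_line(xs, ys):
    """Least-squares fit y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Past jobs: (workload metric, observed execution time in ms).
metric = [100, 200, 300, 400]
time_ms = [12, 22, 32, 42]                 # roughly 0.1 * metric + 2

a, b = fit_line(metric, time_ms)

def will_miss(metric_value, slack_ms):
    """Predict a deadline miss before the job even runs."""
    return a * metric_value + b > slack_ms

assert not will_miss(250, slack_ms=30)     # predicted ~27 ms: fits
assert will_miss(500, slack_ms=30)         # predicted ~52 ms: react early
```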

Speaker's bio:

-



Increasing security and performance with higher-level abstractions for distributed programming
Dr. Andrew Myers | Cornell University, Ithaca

2014-04-10, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

Code and data are exchanged and reused freely across the Internet and the Web. But both are vectors for attacks on confidentiality and integrity. Current systems are vulnerable because they're built at too low a level of abstraction, without principled security assurance. The Fabric project has been developing higher-level abstractions that securely support future open, extensible applications. Unlike current Web abstractions, Fabric has a principled basis for security: language-based information flow, which works even for distrusted mobile code. Warranties, a new abstraction for distributed computation, enable scalability even with a strong consistency model that simplifies programmer reasoning.

Speaker's bio:

Andrew Myers is a Professor in the Cornell University Department of Computer Science in Ithaca, New York, USA. He received his Ph.D. in Electrical Engineering and Computer Science from MIT. Myers is an ACM Fellow. He has received awards for papers appearing in POPL'99, SOSP'01, SOSP'07, CIDR'13, and PLDI'13. He is currently co-Editor-in-Chief for the Journal of Computer Security and serves on the editorial board of ACM Transactions on Computer Systems.



The Cyberspace Battle for Information: Combating Internet Censorship
Amir Houmansadr | University of Texas at Austin

2014-04-07, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The Internet has become ubiquitous, bringing many benefits to people across the globe. Unfortunately, Internet users face threats to their security and privacy: repressive regimes deprive them of freedom of speech and open access to information, governments and corporations monitor their online behavior, advertisers collect and sell their private data, and cybercriminals hurt them financially through security breaches. My research aims to make Internet communications more secure and privacy-preserving. In this talk, I will focus on the design, implementation, and analysis of tools that help users bypass Internet censorship. I will discuss the major challenges in building robust censorship circumvention tools, introduce two novel classes of systems that we have developed to overcome these challenges, and conclude with several directions for future research.

Speaker's bio:

Amir Houmansadr is a postdoctoral scholar at the University of Texas at Austin. He received his Ph.D. from the University of Illinois at Urbana-Champaign in August 2012. Amir’s research revolves around various network security and privacy problems, including Internet censorship circumvention, network traffic analysis, and anonymous communications. He has received several awards for his research, including the Best Practical Paper Award at the IEEE Symposium on Security & Privacy (Oakland) 2013.



Towards a Secure Client-side for the Web Platform
Devdatta Akhawe | UC Berkeley

2014-04-03, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

With the tremendous growth in cloud-based services, the web platform is now easily the most widely used application platform. In this talk, I will present work we have done at Berkeley towards developing a secure client-side for web applications. I will discuss three directions: secure protocols, secure applications, and secure user experience.

First, I will present work on providing a formal foundation for web security protocols. We formalize the typical web attacker model and identify broadly applicable security goals. We also identify an abstraction of the web platform that is amenable to automated analysis yet able to express subtle attacks missed by humans. Using a model checker, we automatically identified a previously unknown flaw in a widely used Kerberos-like authentication protocol for the web.

Second, I will present work on improving assurance in client-side web applications. We identify pervasive over-privileging in client-side web applications and present a new architecture that relies on privilege separation to mitigate vulnerabilities. Our design uses standard primitives and enables a 6x to 10000x reduction in the trusted computing base with less than 13 lines modified.

Lastly, I will present the results of a large-scale measurement study to empirically assess whether browser security warnings are as ineffective as popular opinion suggests. We used Mozilla Firefox and Google Chrome's in-browser telemetry to observe over 25 million warning impressions in situ. Our results demonstrate that security warnings can be effective in practice; security practitioners should not dismiss the goal of communicating security information to end users.

Speaker's bio:

Devdatta is a graduate student at UC Berkeley interested in security of software, with a primary focus on web application security. He is part of Dawn Song's research group at UC Berkeley. Devdatta is also an invited expert on the W3C's Web Application Security Working Group. More details, including how to pronounce his name, are on his homepage: devd.me



Logic-based frameworks for automated verification of programs with dynamically allocated data structures
Cezara Dragoi | IST Austria

2014-03-27, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Dynamically allocated data structures are heavily used in software nowadays to organize and facilitate access to data. This poses new challenges to software verification, related both to control structure and to data. In this talk, we present logic-based frameworks for the automatic verification of programs manipulating data structures that store unbounded data. First, we are concerned with designing decidable logics that can be used to automate deductive verification in the Hoare style of reasoning. This method relies on user-provided annotations, such as loop invariants. Second, we introduce static analyses that can automatically synthesize such annotations. Classically, static analysis has been used to prove non-functional properties, such as the absence of null-pointer dereferences. In this talk, we present static analyses that can prove complex functional properties describing the values stored in the data structure.

Speaker's bio:

Cezara is a Romanian-born computer scientist living in Austria. She is currently a post-doctoral researcher at IST Austria, in the group of Tom Henzinger. In 2011 she was awarded a Ph.D. from the Department of Computer Science at the University of Paris-Diderot (Paris 7), LIAFA, under the advising of Ahmed Bouajjani. Cezara's research focuses on software verification, and in particular on static analyses techniques for programs with dynamically allocated data structures. She has been a teaching and research assistant at the University of Bucharest and at the Romanian Institute of Mathematics.



Equivalence checking of stack-based infinite-state systems
Stefan Goeller | University of Bremen

2014-03-24, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In the context of program refinement, it is of particular interest to be able to automatically check whether the implementation of a specification before the refinement step behaves equivalently to the implementation after the refinement step.

Since equivalence checking of programs is generally undecidable, one is typically confronted with a trade-off problem. On the one hand, one wishes to carry out the equivalence check fully automatically and therefore needs to model/abstract the program in such a way that equivalence checking remains at least decidable. On the other hand, the modeling/abstraction of the program should be expressive enough to model the program as faithfully as possible.

Infinite-state systems are a suitable means for faithfully modeling computer programs. For instance, the call and return behavior of recursive programs can faithfully be modeled using pushdown systems (the transition graphs induced by pushdown automata).

The focus of my talk will be on some recent work on bisimulation equivalence checking of stack-based infinite-state systems, i.e. infinite-state systems whose states and transitions involve the manipulation of a stack.

I will present in a bit more detail a PSPACE upper bound on deciding bisimulation equivalence of one-counter systems, which are pushdown systems over a unary stack alphabet, and an NLOGSPACE upper bound on bisimulation equivalence of deterministic one-counter systems, closing a long-standing complexity gap. Furthermore I will give some intuition why bisimulation equivalence checking of pushdown systems and higher-order pushdown systems is much more difficult to decide from a computational perspective, being nonelementary and undecidable, respectively. I will conclude with some challenging open problems in this area of research.
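While the talk concerns infinite-state systems, the definition of bisimulation equivalence is easiest to see on a finite labeled transition system. The sketch below (states and labels are made up) is the naive partition-refinement check: blocks are split until, per label, all states in a block move into exactly the same blocks.

```python
# Naive partition refinement for bisimulation equivalence on a finite LTS.
# Two states are bisimilar iff they end up in the same block.

def bisimulation_classes(states, trans):
    """trans: set of (source, label, target). Returns a set of blocks."""
    blocks = {frozenset(states)}
    while True:
        def signature(s):
            return frozenset((lbl, blk) for (src, lbl, tgt) in trans
                             if src == s
                             for blk in blocks if tgt in blk)
        new_blocks = set()
        for blk in blocks:
            groups = {}
            for s in blk:
                groups.setdefault(signature(s), set()).add(s)
            new_blocks.update(frozenset(g) for g in groups.values())
        if new_blocks == blocks:
            return blocks
        blocks = new_blocks

# p does 'a' then 'b'; q has the same traces but an extra 'a'-branch that
# deadlocks -- trace-equivalent, yet not bisimilar.
trans = {("p", "a", "p1"), ("p1", "b", "p2"),
         ("q", "a", "q1"), ("q", "a", "q2"), ("q1", "b", "q3")}
blocks = bisimulation_classes({"p", "p1", "p2", "q", "q1", "q2", "q3"}, trans)
same = any({"p", "q"} <= blk for blk in blocks)
print(same)  # False: q can do 'a' and then deadlock, p cannot
```

For pushdown or one-counter systems the state space is infinite, so no such refinement loop terminates in general; that is exactly why the decidability and complexity results in the talk are nontrivial.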

Speaker's bio:

Stefan Goeller is a research associate at the University of Bremen. He received his PhD from the University of Leipzig in 2008. His research interests are in the verification of infinite-state systems, logic, and automata theory. In particular, his recent research has focused on the problem of checking bisimulation equivalences between infinite state systems.



Structure and Dynamics of Diffusion Networks
Manuel Gomez Rodriguez | Max Planck Institute for Intelligent Systems

2014-03-13, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Networks represent a fundamental medium for spreading and diffusion of various types of information, behavior and rumors. However, observing a diffusion process often reduces to noting when nodes (people, blogs, etc.) reproduce a piece of information, adopt a behavior, buy a product, or, more generally, adopt a contagion. We often observe where and when but not how or why contagions propagate through a network. The mechanism underlying the process is hidden. However, the mechanism is of outstanding interest in all cases, since understanding diffusion is necessary for predicting meme propagation, stopping rumors, or maximizing sales of a product.

In this talk, I will present a flexible probabilistic model of diffusion over networks that makes minimal assumptions about the physical, biological or cognitive mechanisms responsible for diffusion. This is possible since the model is data-driven and relies primarily on the visible temporal traces that diffusion processes generate. I apply the model to information diffusion among 3.3 million blogs and mainstream media sites during a one year period. The model allows us to predict future events, it sheds light on the hidden underlying structure and temporal dynamics of diffusion, and provides insights into the positions and roles various nodes play in the diffusion process.
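The kind of "visible temporal trace" the model consumes can be produced by a small simulation (graph, rates, and node names below are made up): a contagion starts at one node, each edge transmits after a random delay, and all we record is when each node adopted it, never which edge was responsible.

```python
# Simulate a diffusion cascade over a directed network with exponentially
# distributed per-edge transmission delays, recording only adoption times.

import heapq
import random

def simulate_cascade(edges, source, seed=0):
    """edges: {node: [(neighbor, rate), ...]}. Returns {node: adoption_time}."""
    rng = random.Random(seed)
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue                        # stale queue entry
        for v, rate in edges.get(u, []):
            tv = t + rng.expovariate(rate)  # transmission delay over u -> v
            if tv < times.get(v, float("inf")):
                times[v] = tv
                heapq.heappush(heap, (tv, v))
    return times

edges = {"blog_a": [("blog_b", 1.0), ("news_c", 0.5)],
         "blog_b": [("news_c", 2.0)]}
trace = simulate_cascade(edges, "blog_a")
print(sorted(trace, key=trace.get))         # order in which nodes adopted
```

Inferring the hidden `edges` structure from many such `trace` observations is, in essence, the inverse problem the talk's model addresses.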

Speaker's bio:

Manuel Gomez Rodriguez is a Research Scientist at Max Planck Institute for Intelligent Systems. Manuel develops machine learning and large-scale data mining methods for the analysis and modeling of large real-world networks and processes that take place over them. He is particularly interested in problems motivated by the Web and social media and has received several recognitions for his research, including an Outstanding Paper Award at NIPS'13 and a Best Research Paper Honorable Mention at KDD'10. Manuel holds a PhD in Electrical Engineering from Stanford University and a BS in Electrical Engineering from Carlos III University in Madrid (Spain). You can find more about him at http://people.tuebingen.mpg.de/manuelgr/



Can You Hide in an Internet Panopticon?
Bryan Ford | Yale University

2014-03-12, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Many people have legitimate needs to avoid their online activities being tracked and linked to their real-world identities - from citizens of authoritarian regimes, to everyday victims of domestic abuse or law enforcement officers investigating organized crime. Current state-of-the-art anonymous communication systems are based on onion routing, an approach effective against localized adversaries with a limited ability to monitor or tamper with network traffic. In an environment of increasingly powerful and all-seeing state-level adversaries, however, onion routing is showing cracks, and may not offer reliable security for much longer. All current anonymity systems are vulnerable in varying degrees to five major classes of attacks: global passive traffic analysis, active attacks, "denial-of-security" or DoSec attacks, intersection attacks, and software exploits.

The Dissent project is prototyping a next-generation anonymity system representing a ground-up redesign of current approaches. Dissent is the first anonymity and pseudonymity architecture incorporating protection against the five major classes of known attacks. By switching from onion routing to alternate anonymity primitives offering provable resistance to traffic analysis, Dissent makes anonymity possible even against an adversary who can monitor most, or all, network communication. A collective control plane renders a group of participants in an online community indistinguishable even if an adversary interferes actively, such as by delaying messages or forcing users offline. Protocol-level accountability enables groups to identify and expel misbehaving nodes, preserving availability, and preventing adversaries from using denial-of-service attacks to weaken anonymity. The system computes anonymity metrics that give users realistic indicators of anonymity protection, even against adversaries capable of long-term intersection and statistical disclosure attacks, and gives users control over tradeoffs between anonymity loss and communication responsiveness. Finally, virtual machine isolation offers anonymity protection against browser software exploits of the kind recently employed to de-anonymize Tor users. While Dissent is still a proof-of-concept prototype with important functionality and performance limitations, preliminary evidence suggests that it may in principle be possible - though by no means easy - to hide in an Internet panopticon.

Speaker's bio:

Bryan Ford leads the Decentralized/Distributed Systems (DeDiS) research group at Yale University. His work focuses broadly on building secure systems, touching on many particular topics including secure and certified OS kernels, parallel and distributed computing, privacy-preserving technologies, and Internet architecture. He has received the Jay Lepreau Best Paper Award at OSDI, and multiple grants from NSF, DARPA, and ONR, including the NSF CAREER award. His pedagogical achievements include PIOS, the first OS course framework leading students through development of a working, native multiprocessor OS kernel. Prof. Ford earned his B.S. at the University of Utah and his Ph.D. at MIT, while researching topics including mobile device naming and routing, virtualization, microkernel architectures, and touching on programming languages and formal methods.



Well-Designed Linguistics for Structured Parallel Programming
I-Ting Angelina Lee | MIT CSAIL

2014-03-10, 10:30 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Parallel programming is hard.  Most parallel programs today deal with the complexities of parallel programming, such as scheduling and synchronization, using low-level system primitives such as pthreads, locks, and condition variables.  Although these low-level primitives are flexible, like goto statements, they lack structure and make it difficult for the programmer to reason locally about the program state.  Just as goto has been mostly deprecated in ordinary serial programming in favor of structured control constructs, we can simplify parallel programming by replacing these low-level concurrency primitives with linguistic constructs that enable well structured parallel programs.  Of course, elegant linguistic structure is only half the battle.  The underlying system must also efficiently support the linguistics, allowing the programmer to write fast code.

I have developed several new parallel linguistic constructs and devised new, more efficient runtime support for other constructs invented by others.  In this talk, I will focus largely on one example: a pipe_while construct that supports pipeline parallelism, a programming pattern commonly used in streaming applications.   This example provides a case study of how well-designed linguistics for structured parallel programming can simplify parallel programming while allowing the runtime system to execute the linguistic model efficiently.  This work has had some impact since its publication --- Intel released an experimental branch of Cilk Plus that incorporates support for parallel pipelining based on this work.  I will also mention other examples from my research to demonstrate how novel mechanisms in operating systems and hardware, not just the runtime, can help provide efficient support for parallel-programming linguistics.
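The pipeline pattern that pipe_while expresses can be approximated with ordinary threads and queues. The sketch below is only a rough analogue (function and stage names are invented): each stage runs in its own thread and items flow stage to stage, so different iterations occupy different stages concurrently, but none of pipe_while's scheduling guarantees are reproduced.

```python
# Rough analogue of pipeline parallelism using stdlib threads and queues:
# each stage is a thread; items stream through, preserving order.

import queue
import threading

DONE = object()                             # end-of-stream sentinel

def stage(fn, inbox, outbox):
    while True:
        item = inbox.get()
        if item is DONE:
            outbox.put(DONE)
            return
        outbox.put(fn(item))

def run_pipeline(items, stage_fns):
    qs = [queue.Queue() for _ in range(len(stage_fns) + 1)]
    threads = [threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stage_fns)]
    for t in threads:
        t.start()
    for item in items:
        qs[0].put(item)
    qs[0].put(DONE)
    out = []
    while (item := qs[-1].get()) is not DONE:
        out.append(item)
    for t in threads:
        t.join()
    return out

# Three stages applied in order to each item; item order is preserved.
result = run_pipeline([1, 2, 3], [lambda x: x + 1, lambda x: x * 10, str])
print(result)  # ['20', '30', '40']
```

The linguistic point of the talk is that a construct like pipe_while lets the programmer state this structure directly, leaving scheduling and synchronization to the runtime rather than to hand-managed threads and queues as above.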

Speaker's bio:

I-Ting Angelina Lee is a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with Prof. Charles E. Leiserson. Her areas of interest include designing linguistics for parallel programming, developing runtime system support for multithreaded software, and building novel mechanisms in operating systems and hardware to efficiently support parallel abstractions.  Her work on "memory-mapped reducers" won best paper at SPAA 2012.  She received her Ph.D. from MIT in 2012 under the supervision of Prof. Charles E. Leiserson.  She received her Bachelor of Science in Computer Science from UC San Diego in 2003.



On the importance of Internet eXchange Points for today's Internet ecosystem
Anja Feldmann | Telekom Innovation Laboratories, TU Berlin

2014-03-06, 14:15 - 15:15
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Internet eXchange Points (IXPs) are generally considered to be the successors of the four Network Access Points that were mandated as part of the decommissioning of the NSFNET in 1994/95 to facilitate the transition to the "public Internet" as we know it today. While this does not tell the whole story of the early beginnings, it is true that since around 1994 the number of IXPs has grown to more than 300, with the largest IXPs handling traffic volumes comparable to those of Tier-1 ISPs. Yet IXPs have never attracted much attention from the research community. At first glance, this lack of interest seems understandable, as IXPs apparently have little to do with current "hot" topic areas such as data centers and cloud services, software-defined networking (SDN), or mobile communication. However, we argue that IXPs are not only cool monitoring points with huge visibility but are in fact all about Internet connectivity, data centers and cloud services, and even SDN and mobile communication. To this end, in this talk we start with an overview of the basic technical and operational aspects of IXPs and then highlight some of our research results regarding application mix, the AS graph, Internet infrastructure distribution, and traffic flows.

Speaker's bio:

-



Machine Learning for Social Systems: Modeling Opinions, Activities, and Interactions
Julian McAuley | Stanford University

2014-03-06, 11:30 - 13:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The proliferation of user-generated content on the web provides a wealth of opportunity to study humans through their online traces. I will discuss three aspects of my research, which aims to model and understand people's behavior online. First, I will develop rich models of opinions by combining structured data (such as ratings) with unstructured data (such as text). Second, I will describe how preferences and behavior evolve over time, in order to characterize the process by which people "acquire tastes" for products such as beer and wine. Finally, I will discuss how people organize their personal social networks into communities with common interests and interactions. These lines of research require models that are capable of handling high-dimensional, interdependent, and time-evolving data, in order to gain insights into how humans behave.

Speaker's bio:

Julian McAuley is a postdoctoral scholar at Stanford University, where he works with Jure Leskovec on modeling the structure and dynamics of social networks. His current work is concerned with modeling opinions and behavior in online communities, especially with respect to their linguistic and temporal dimensions. Previously, Julian received his PhD from the ANU under Tiberio Caetano, with whom he worked on inference and learning in structured output spaces. His work has been featured in Time, Forbes, New Scientist, and Wired, and has received over 30,000 "likes" on Facebook.



How to find a good program abstraction automatically?
Hongseok Yang | University of Oxford

2014-03-04, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

Recent years have seen the development of successful commercial programming tools based on static analysis technologies, which automatically verify intended properties of programs or find tricky bugs that are difficult to detect by testing techniques. One of the key reasons for this success is that these tools use clever strategies for abstracting programs -- most details about a given program are abstracted away by these strategies, unless they are predicted to be crucial for proving a given property about the program or detecting a type of program error of interest. Developing such a strategy is, however, nontrivial, and is currently done through a large amount of manual engineering effort in most tool projects. Finding a good abstraction strategy automatically, or even reducing the manual effort involved in developing such a strategy, is considered one of the main open challenges in the area of program analysis.

In this talk, I will explain how I tried to address this challenge with colleagues in the US and Korea in the past few years. During this time, we worked on parametric program analyses, where parameters for controlling the degree of program abstraction are expressed explicitly in the specification of the analyses. For those analyses, we developed algorithms for searching for a desired parameter value with respect to a given program and a given property, which use ideas from the neighboring areas of program analysis such as testing, searching and optimisation. In my talk, I will describe the main ideas behind these algorithms without going into technical details. I will focus on intuitions about why and when these algorithms work. I will also talk briefly about a few lessons that I learnt while working on this problem.
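A toy version of the search problem gives some intuition (everything here is illustrative, not the algorithms from the talk): the parameter is the set of variables the analysis tracks precisely, the analysis is queried as a black box, and a simple greedy descent finds a minimal sufficient parameter value.

```python
# Toy parametric-analysis search: the (mock) analysis proves the property
# only if some unknown core of variables is tracked precisely. We search
# for a minimal sufficient parameter by querying the analysis as an oracle.

def make_mock_analysis(core):
    """Analysis succeeds iff every variable in `core` is tracked."""
    calls = []
    def analysis(tracked):
        calls.append(set(tracked))
        return core <= set(tracked)
    return analysis, calls

def minimize_parameter(analysis, all_vars):
    """Greedily drop variables whose precise tracking is not needed."""
    tracked = set(all_vars)
    assert analysis(tracked), "even the most precise abstraction fails"
    for v in sorted(all_vars):
        if analysis(tracked - {v}):
            tracked.discard(v)              # v was not needed for the proof
    return tracked

all_vars = {"x", "y", "z", "w"}
analysis, calls = make_mock_analysis(core={"y", "w"})
minimal = minimize_parameter(analysis, all_vars)
print(sorted(minimal))  # ['w', 'y']
```

Real parameter spaces are far too large for such naive search, which is why the algorithms in the talk borrow ideas from testing, search, and optimisation to steer the exploration.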

This talk is based on the joint work with Mayur Naik, Xin Zhang, Ravi Mangal, Radu Grigore, Hakjoo Oh, Wonchan Lee, Kihong Heo, and Kwangkeun Yi.

Speaker's bio:

-



Automating Construction of Provably Correct Software
Prof. Viktor Kuncak | EPFL, Switzerland

2014-02-27, 10:30 - 11:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 111

Abstract:

I will present techniques my research group has been developing to transform reusable software specifications, suitable for users and designers, into executable implementations, suitable for efficient execution. I will outline deductive synthesis techniques that transform input/output behavior descriptions (such as postconditions, invariants, and examples) into conventional functions from inputs to outputs. We have applied these techniques to complex functional data structures, out-of-core database algorithms, and numerical computations.
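The spec-to-implementation step can be conveyed with a tiny enumerative sketch (the grammar, examples, and names below are invented; the techniques in the talk are deductive and far more powerful): given a specification by examples, search a small space of candidate expressions for one that satisfies all of them.

```python
# Tiny synthesis sketch: from input/output examples to a working function,
# by enumerating a small grammar of candidate expressions.

import itertools

# Candidate expression grammar: (name, function of x and constant c).
CANDIDATES = [
    ("x + c", lambda x, c: x + c),
    ("x * c", lambda x, c: x * c),
    ("c - x", lambda x, c: c - x),
]

def synthesize(examples, const_range=range(-5, 6)):
    """Return (description, function) satisfying all (input, output) pairs."""
    for (name, fn), c in itertools.product(CANDIDATES, const_range):
        if all(fn(x, c) == y for x, y in examples):
            impl = lambda x, fn=fn, c=c: fn(x, c)
            return name.replace("c", str(c)), impl
    return None

# Specification by examples: f(1) = 3, f(4) = 12.
desc, f = synthesize([(1, 3), (4, 12)])
print(desc, f(10))  # x * 3 30
```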

Speaker's bio:

Viktor Kuncak is Associate Professor in the School of Computer and Communication Sciences of the Swiss Federal Institute of Technology, Lausanne. His research goal is to increase software development productivity and software reliability through new algorithms and tools for synthesis, analysis, and automated reasoning. In 2012 he received an ERC grant to develop the concept of Implicit Programming, whose aim is to make programming easier and more accessible. He has also received a SIGSOFT distinguished paper award, and his work was published as a Communications of the ACM Research Highlight. He has been program chair of the conferences Verification, Model Checking and Abstract Interpretation (2012) and Formal Methods in Computer-Aided Design (2014).



Privacy Preserving Technologies and an Application to Public Transit Systems
Foteini Baldimtsi | Brown University

2014-02-26, 11:30 - 12:30
Saarbrücken building E1 5, room 029

Abstract:

Ubiquitous electronic transactions give us efficiency and convenience, but introduce security and reliability issues and affect user privacy. Consider for example how much more private information is revealed during online shopping compared to what leaks in physical transactions that are paid in cash. Luckily, cryptographic research gives us the tools to achieve efficient and secure electronic transactions that at the same time preserve user privacy. Anonymous credentials are one such tool, allowing users to prove possession of credentials while revealing only the minimum amount of information required. In the first part of this talk, we present "Anonymous Credentials Light": the first provably secure construction of anonymous credentials that is based on the DDH assumption and can work in the elliptic group setting without bilinear pairings. Our construction requires just a few exponentiations in a prime-order group in which the DDH problem is hard, which makes it suitable for mobile devices, RFIDs and smartcards.

In the second part of the talk, we explain how to obtain secure e-cash with attributes from our construction, and we show implementation results on an NFC-enabled smartphone. The efficiency of our scheme is comparable to that of Brands' e-cash, which is known to be the most efficient e-cash scheme in the literature but which, as our recent work shows, cannot be proven secure using currently known techniques.

Speaker's bio:

-



Decidable Verification of Database-powered Business Processes
Prof. Alin Deutsch | University of California, San Diego

2014-02-24, 10:30 - 11:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

This talk addresses the static verification problem for data-centric business processes, using as vehicle the "business artifact" model recently deployed by IBM in commercial products and consulting services, and studied in an increasing line of research papers.

Artifacts are records of variables that correspond to business-relevant objects and are updated by a set of services, equipped with pre- and post-conditions, that implement business process tasks. For the purpose of this talk, the verification problem consists in statically checking whether all runs of an artifact system satisfy desirable properties expressed in (some first-order extension of) a temporal logic.

The talk surveys results that identify various practically significant classes of business artifact systems with decidable verification problem. The talk is based on a series of prior and current work conducted jointly with Elio Damagio, David Lorant, Yuliang Li, Victor Vianu (UCSD), and Richard Hull (IBM TJ Watson).

Speaker's bio:

Alin Deutsch is a professor of computer science at the University of California, San Diego. His research is motivated by the data management challenges raised by applications that are powered by underlying databases (viewed in a broad sense that includes traditional database management systems but also collections of semi- and un-structured data providing a query interface, however rudimentary).

Alin's education includes a PhD degree from the University of Pennsylvania, an MSc degree from the Technical University of Darmstadt (Germany) and a BSc degree from the Polytechnic University Bucharest (Romania). He is the recipient of a Sloan fellowship and an NSF CAREER award, and has served as PC chair of the ICDT-2012 International Conference on Database Theory, the PLANX-2009 Workshop on Programming Language Techniques for XML, and the WebDB-2006 International Workshop on the Web and Databases.



Operating System Services for High-Throughput Accelerators
Mark Silberstein | Technion, Israel Institute of Technology

2014-02-17, 14:00 - 15:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Future applications will need to use programmable high-throughput accelerators like GPUs to achieve their performance and power goals. However, building efficient systems that use accelerators today is incredibly difficult. I argue that the main problem lies in the lack of appropriate OS support for accelerators -- while OSes provide optimized resource management and I/O services to CPU applications, they make no such services available to accelerator programs.

I propose to build an operating system layer for GPUs which provides I/O services via familiar OS abstractions directly to programs running on GPUs. This layer effectively transforms GPUs into first-class computing devices with full I/O support, extending the constrained GPU-as-coprocessor programming model.

As a concrete example I will describe GPUfs, a software layer which enables GPU programs to access host files. GPUfs provides a POSIX-like API, exploits parallelism for efficiency, and optimizes for access locality by extending a CPU buffer cache into the physical memories of all GPUs and CPUs in a single machine. Using real benchmarks I will show that GPUfs simplifies the development of efficient applications by eliminating the GPU management complexity, and broadens the range of applications that can be accelerated by GPUs. For example, a simple self-contained GPU program which searches for a set of strings in the entire tree of Linux kernel source files completes in about a third of the time of an 8-core CPU run.

I will then describe my ongoing work on native network support for GPUs, current open problems and future directions.

The talk is self-contained, no background in GPU computing is necessary.

This is a joint work with Emmett Witchel, Bryan Ford, Idit Keidar and UT Austin students.

Speaker's bio:

Mark Silberstein is an Assistant Professor at the Electrical Engineering Department, Technion, Israel. He truly believes that building practical, programmable and efficient computer systems with computational accelerators requires cross-cutting changes in system interfaces, OS design, hardware mechanisms, storage and networking services, as well as programming models and parallel algorithms, all of which constitute his research interests, keep him busy, and excited.

Web page: https://sites.google.com/site/silbersteinmark



Real-time Scheduling and Mixed-Criticality Systems
Sanjoy Baruah | University of North Carolina at Chapel Hill

2014-02-10, 14:00 - 15:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In the context of computer systems, scheduling theory seeks to enable the efficient utilization of computing resources in order to optimize specified system-wide objectives. In this presentation we will examine how real-time scheduling theory is dealing with the recent trend in embedded systems towards implementing functionalities of different levels of importance, or criticalities, upon a shared platform. We will explore the factors that motivated this trend towards mixed-criticality (MC) systems, discuss how these MC systems pose new challenges to real-time scheduling theory, and describe how real-time scheduling theory is responding to these challenges by devising new models and methods for the design and analysis of MC systems.
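The mixed-criticality task model discussed above can be made concrete with a small sketch in the style of Vestal's model. The task parameters and the per-level utilization check below are illustrative assumptions of mine, not material from the talk:

```python
# Toy mixed-criticality task set (in the style of Vestal's model).
# Each task carries one WCET estimate per criticality level; all
# numbers here are made up for illustration.
tasks = [
    # (criticality, period, wcet_lo, wcet_hi)
    ("HI", 10, 2, 4),
    ("LO", 5, 1, 1),
]

def util(tasks, level):
    # LO mode budgets every task at its optimistic WCET; HI mode
    # only guarantees the HI-criticality tasks, at pessimistic WCETs.
    if level == "LO":
        return sum(c_lo / p for _, p, c_lo, _ in tasks)
    return sum(c_hi / p for crit, p, _, c_hi in tasks if crit == "HI")

# Both modes fit on a single processor in this example.
assert abs(util(tasks, "LO") - 0.4) < 1e-9   # 2/10 + 1/5
assert abs(util(tasks, "HI") - 0.4) < 1e-9   # 4/10
```

The point of the model is visible even in this toy: the same task set is analysed twice, once per criticality level, with different execution-time assumptions.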

Speaker's bio:

Sanjoy Baruah is a professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He received his Ph.D. from the University of Texas at Austin in 1993. His research and teaching interests are in scheduling theory, real-time and safety-critical system design, and resource-allocation and sharing in distributed computing environments.



Components for Building Secure Decentralized Networks
Christian Grothoff | TU Muenchen

2014-01-23, 16:00 - 17:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

This talk will present technical solutions towards a secure and fully decentralized future Internet, with focus on the GNU Name System. The GNU Name System is a new decentralized public key infrastructure which we propose as an alternative to DNS(SEC) and X.509, in particular for online social networking applications. I will also give an overview of the GNUnet architecture and give pointers to ongoing related research.

Speaker's bio:

Christian Grothoff is currently on the faculty of the Technische Universitaet Muenchen leading an Emmy Noether research group in the area of computer networks. He earned his PhD in computer science from UCLA, an M.S. in computer science from Purdue University, and both a Diplom II in mathematics and the first Staatsexamen in chemistry from the Bergische Universitaet Gesamthochschule (BUGH) Wuppertal. His research interests include compilers, programming languages, software engineering, networking, and security.



Verifying Probabilistic Programs
Stefan Kiefer | University of Oxford

2013-12-16, 11:00 - 12:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

I am going to talk about two approaches to the verification of probabilistic systems: (1) equivalence analysis; and (2) termination analysis.

Deciding equivalence of probabilistic automata is a key problem for establishing various behavioural and anonymity properties of probabilistic systems. In particular, it is at the heart of the tool APEX, an analyser for probabilistic programs. APEX is based on game semantics and analyses a broad range of anonymity and termination properties of randomised protocols and other open programs.

Proving that programs terminate is a fundamental challenge in computer science. Recent research has produced powerful tools that can check a wide range of programs for termination. The analogue for probabilistic programs, namely termination with probability one ("almost-sure termination"), is an equally important property for randomised algorithms and probabilistic protocols. We have developed a novel algorithm for proving almost-sure termination of probabilistic programs. Our algorithm exploits the power of state-of-the-art model checkers and termination provers for nonprobabilistic programs.
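A minimal example of the distinction the abstract draws (a hypothetical illustration of mine, not taken from the speaker's tool): the loop below terminates with probability one, yet for every bound n there is a nonzero chance of running longer than n steps, so no ordinary termination proof applies.

```python
import random

def flips_until_heads(rng):
    """Terminates with probability 1 (almost surely), but has no
    finite worst-case bound: keep flipping a fair coin until heads."""
    steps = 0
    while rng.random() < 0.5:   # tails: keep flipping
        steps += 1
    return steps                # number of tails seen before heads

rng = random.Random(0)
runs = [flips_until_heads(rng) for _ in range(10_000)]
# Every sampled run terminated; the expected number of tails is 1.
print(max(runs), sum(runs) / len(runs))
```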

Speaker's bio:

-



Inferring Invisible Internet Traffic
Mark Crovella | Boston University

2013-12-06, 10:15 - 11:15
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

The Internet is at once an immense engineering artifact, a pervasive social force, and a fascinating object of study. Unfortunately many natural questions about the Internet cannot be answered by direct measurement, requiring us to turn instead to the tools of statistical inference. As a detailed example I'll describe a current project in traffic measurement. We are asking the question: using traffic measurements taken at one location in the Internet, can we estimate how much traffic is flowing in a different part of the Internet? Surprisingly, the answer is yes. I'll explain why this is possible (with a connection to problems like the Netflix Prize), how it can be done, and how this result could be used to give a network operator an edge over its competitors.
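To hint at why measurements in one place can determine traffic elsewhere (the stated connection to the Netflix Prize is essentially matrix completion), here is a toy sketch of mine under a strong assumption the talk does not necessarily make, namely that the source-by-destination traffic matrix is exactly rank one:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(1, 2, size=6)   # per-source activity levels
v = rng.uniform(1, 2, size=6)   # per-destination popularity
traffic = np.outer(u, v)        # rank-1 "gravity model" traffic matrix

# Observe only one full row and one full column of the matrix...
row, col = traffic[0, :], traffic[:, 0]
# ...and reconstruct every unobserved entry from them alone.
estimate = np.outer(col, row) / traffic[0, 0]
assert np.allclose(estimate, traffic)
```

Real traffic matrices are only approximately low-rank, which is why statistical inference rather than exact algebra is needed, but the underlying leverage is the same.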

Speaker's bio:

Mark Crovella is Professor and Chair of the Department of Computer Science at Boston University, where he has been since 1994. He also currently serves as Chief Scientist of Guavus, Inc. During 2003-2004 he was a Visiting Associate Professor at the Laboratoire d'Informatique de Paris VI (LIP6). He received a B.S. from Cornell University in 1982, and an M.S. from the State University of New York at Buffalo. He received his Ph.D. in Computer Science from the University of Rochester in 1994. From 1984 to 1994 he worked at Calspan Corporation in Buffalo NY, eventually as a Senior Computer Scientist. His research interests center on improving the understanding, design, and performance of parallel and networked computer systems, mainly through the application of data mining, statistics, and performance evaluation. In the networking arena, he has worked on characterizing the Internet and the World Wide Web. He has explored the presence and implications of self-similarity and heavy-tailed distributions in network traffic and Web workloads. He has also investigated the implications of Web workloads for the design of scalable and cost-effective Web servers. In addition he has made numerous contributions to Internet measurement and modeling; and he has examined the impact of network properties on the design of protocols and the construction of statistical models. As of 2013, Google Scholar reports over 19,000 citations to his work. He has given numerous invited talks and tutorials, and is a founder of and consultant to companies involved in Internet technologies. Professor Crovella is co-author of Internet Measurement: Infrastructure, Traffic, and Applications (Wiley Press, 2006) and is the author of over two hundred papers on networking and computer systems. He holds ten patents deriving from his research. Between 2007 and 2009 he was Chair of ACM SIGCOMM.
He is a past editor for Computer Communication Review, IEEE/ACM Transactions on Networking, Computer Networks and IEEE Transactions on Computers. He was the Program Chair for the 2003 ACM SIGCOMM Internet Measurement Conference and for IFIP Networking 2010, and the General Chair of the 2005 Passive and Active Measurement Workshop. His paper (with Azer Bestavros) "Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes" received the 2010 ACM SIGMETRICS Test of Time Award, and his paper (with Gonca Gursun, Natali Ruchansky, and Evimaria Terzi) "Routing State Distance: A Path-Based Metric for Network Analysis" won a 2013 IETF/IRTF Applied Networking Research Prize. Professor Crovella is a Fellow of the ACM and the IEEE.



Computer-Aided Cryptographic Analysis and Design
Gilles Barthe | IMDEA Madrid

2013-12-05, 11:00 - 12:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

EasyCrypt is a tool-supported framework for the machine-checked construction and verification of security proofs of cryptographic systems; it has been used to verify emblematic examples of public-key encryption schemes, digital signature schemes, hash function designs, and block cipher modes of operation. The lecture will motivate the role of computer-aided proofs in the broader context of provable security, explore the connections between provable security and programming-language methods, and speculate on potential applications of computer tools in the security analysis and design of cryptographic constructions.

Speaker's bio:

Gilles Barthe received a Ph.D. in Mathematics from the University of Manchester, UK, in 1993, and an Habilitation à diriger les recherches in Computer Science from the University of Nice, France, in 2004. He joined the IMDEA Software Institute as a research professor in April 2008. His research interests include programming languages, program verification, software and system security, and cryptography. Since 2006, he has worked on applying formal verification of probabilistic programs to proving security of cryptographic constructions in the computational model, and been involved in the development of several tools based on this approach, including EasyCrypt and ZooCrypt.



An experimentation platform for the Internet's edge
Fabián E. Bustamante | Northwestern University

2013-11-21, 11:00 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Internet measurement and experimentation platforms, such as PlanetLab, have become essential for network studies and distributed systems evaluation. Despite their many benefits and strengths, a by-now well-known problem with existing platforms is their inability to capture the geographic and network diversity of the wider, commercial Internet. Lack of diversity and poor visibility into the network hamper progress in a number of important research areas, from network troubleshooting to broadband characterization and Internet topology mapping, and complicate our attempts to generalize from test-bed evaluations of networked systems. The issue has served as motivation for several efforts to build new experimentation platforms and expand existing ones. However, capturing the edge of the network remains an elusive goal. I argue that at its root, the problem is one of incentives. Today's platforms build on the assumption that the goals of experimenters and those hosting the platform are the same. As much of the Internet's growth occurs in residential broadband and mobile networks, this assumption no longer holds. In this talk, I will present Dasu, a measurement experimentation platform for the Internet's edge that explicitly aligns the objectives of the experimenters with those of the users hosting the platform. Dasu is designed to support both network measurement experimentation and broadband characterization. Dasu has been publicly available since mid-2010 and has been adopted by over 95,000 users across 150 countries. I will then illustrate the value of Dasu's unique perspective with some of the ongoing projects and collaborations already taking advantage of it.

Speaker's bio:

Fabián E. Bustamante is an associate professor of computer science in the EECS department at Northwestern University. He joined Northwestern in 2002, after receiving his Ph.D. and M.S. in Computer Science from the Georgia Institute of Technology. His research focuses on the measurement, analysis and design of Internet-scale distributed systems and their supporting infrastructure. Fabián is a recipient of the US National Science Foundation CAREER award and the E.T.S. Walton Award from Science Foundation Ireland, and a senior member of both the ACM and the IEEE. He currently serves on the editorial boards of IEEE Internet Computing and ACM SIGCOMM CCR, on the Steering Committee for IEEE P2P (as chair), and on the External Advisory Board for the mPlane initiative. Fabián is also general co-chair of ACM SIGCOMM 2014, to be held in Chicago. For more detailed information and a list of publications, please visit: http://www.aqualab.cs.northwestern.edu



From bounded affine types to automatic timing analysis
Dan R. Ghica | University of Birmingham

2013-11-20, 14:00 - 15:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Bounded linear types have proved to be useful for automated resource analysis and control in functional programming languages. In this work we introduce an affine bounded linear typing discipline on a general notion of resource that can be modeled by a semiring. For this type system we provide both a general type-inference procedure, parameterized by the decision procedure for the semiring's equational theory, and a (coherent) categorical semantics. This is a useful type-theoretic and denotational framework for many applications in resource-sensitive compilation, and it generalizes several existing type systems. As a non-trivial instance, motivated by our ongoing work on hardware compilation, we present a complex new application to calculating and controlling the timing of execution in a (recursion-free) higher-order functional programming language with local store.
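As a rough illustration of resources modeled in a semiring (a toy of my own, not the paper's formal system): usages of independent subterms combine with the semiring addition, nesting under a repeated context scales them with the semiring multiplication, and an affine discipline caps the resulting bound.

```python
# Toy usage-counting "semiring" (N, +, *): '+' accumulates usages
# of independent subterms, '*' scales usage under nested contexts.
def add(a, b):      # independent composition: usages accumulate
    return a + b

def mul(a, b):      # nesting: a repetitions of a context using b times
    return a * b

# Usage bound of x in:  for i in range(3): f(x); g(x, x)
per_iteration = add(1, 2)        # f uses x once, g uses x twice
total = mul(3, per_iteration)    # three iterations of the loop body
assert total == 9

# An affine discipline rejects any usage bound above one:
def affine_ok(usage):
    return usage <= 1

assert not affine_ok(total)
```

Swapping in a different semiring (e.g. time bounds with max/plus) reuses the same bookkeeping, which is the generality the abstract claims.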

Dan R. Ghica, Alex Smith

Speaker's bio:

-



Cluster Management at Google
John Wilkes | Google

2013-09-26, 16:00 - 17:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Cluster management is the term that Google uses to describe how we control the computing infrastructure in our datacenters that supports almost all of our external services. It includes allocating resources to different applications on our fleet of computers, looking after software installations and hardware, monitoring, and many other things. My goal is to present an overview of some of these systems, introduce Omega, the new cluster-manager tool we are building, and present some of the challenges that we're facing along the way. Many of these challenges represent research opportunities, so I'll spend the majority of the time discussing those.

Speaker's bio:

John Wilkes has been at Google since 2008, where he is working on cluster management and infrastructure services. He is interested in far too many aspects of distributed systems, but a recurring theme has been technologies that allow systems to manage themselves. In his spare time he continues, stubbornly, trying to learn how to blow glass.



Separation logic, object-orientation and refinement
Stephan van Staden | ETH Zurich

2013-09-19, 13:30 - 15:00
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

-

Speaker's bio:

-



On the Achievability of Simulation-based Security for Functional Encryption
Vincenzo Iovino | Universita di Salerno

2013-09-04, 15:00 - 16:00
Saarbrücken building E1 4, room 024

Abstract:

Let F : K×M → Σ be a functionality, where K is the key space, M is the message space, and Σ is the output space. Then a functional encryption (FE) scheme for F is a special encryption scheme in which, for every key k∈K, the owner of the master secret key Msk associated with the public key Pk can generate a special key or "token" Tok that allows the computation of F(k,m) from a ciphertext of m computed under public key Pk. In other words, whereas in traditional encryption schemes decryption is an all-or-nothing affair, in FE it is possible to finely control the amount of information that is revealed by a ciphertext. Unlike traditional encryption, for FE indistinguishability-based security is not equivalent to simulation-based security. This work attempts to clarify to what extent simulation-based security (SIM-security) is achievable for functional encryption, and its relation to the weaker indistinguishability-based security (IND-security). Our main result is a compiler that transforms any FE scheme for the general circuit functionality (which we denote by circuit-FE) meeting IND-security into a circuit-FE scheme meeting SIM-security, where: (1) in the random oracle model, the resulting scheme is secure for an unbounded number of encryption and key queries, which is the strongest security level one can ask for; (2) in the standard model, the resulting scheme is secure for a bounded number of encryption and non-adaptive key queries, but an unbounded number of adaptive key queries. This matches known impossibility results and improves upon Gorbunov et al. [CRYPTO'12] (which is only secure for non-adaptive key queries). Our compiler is inspired by the celebrated Feige-Lapidot-Shamir paradigm [FOCS'90] for obtaining zero-knowledge proof systems from witness-indistinguishable proof systems. We also give a tailored construction of SIM-secure hidden vector encryption (HVE) in composite-order bilinear groups.
Finally, we revisit the known negative results for SIM-secure FE, extending them to natural weakenings of the security definition and thus providing essentially a full picture of the achievability of FE. We conclude with open problems and future challenges in the area.
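To fix the FE interface (not its security) in mind, here is a deliberately insecure toy sketch of the API shape; the functionality F, the XOR "encryption", and the identification of the toy public key with the master key are placeholders of my own. The token simply decrypts internally and releases only F(k, m):

```python
import os

def F(k, m):
    return m % k        # placeholder functionality, chosen arbitrarily

def setup():
    msk = int.from_bytes(os.urandom(8), "big")
    return msk, msk     # toy: the "public key" equals the master key

def encrypt(pk, m):
    return m ^ pk       # toy XOR "encryption"; provides no security

def keygen(msk, k):
    def token(ct):
        m = ct ^ msk    # the token decrypts internally...
        return F(k, m)  # ...and releases only F(k, m), nothing else
    return token

pk, msk = setup()
ct = encrypt(pk, 41)
tok = keygen(msk, 7)
assert tok(ct) == 41 % 7 == 6
```

A real FE scheme must achieve the same input/output behaviour without the token ever holding decryption power over the whole message, which is exactly where the IND- versus SIM-security gap discussed above arises.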

Speaker's bio:

-



Modular Verification of Finite Blocking
Peter Müller | ETH Zürich

2013-08-29, 13:30 - 14:30
Kaiserslautern building G26, room 111 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Finite blocking is an important correctness property of multi-threaded programs. It requires that each blocking operation such as acquiring a lock or joining a thread executes eventually. In addition to deadlock freedom, finite blocking in the presence of non-terminating threads requires one to prove that no thread holds a lock indefinitely or attempts to join a non-terminating thread. In this talk, I will present a verification technique for finite blocking of possibly non-terminating programs. The key idea is to track explicitly whether a thread has an obligation to perform an operation that releases another thread from being blocked, for instance, an obligation to release a lock or to terminate. Each obligation is associated with a lifetime to ensure that it is fulfilled within finitely many steps. Our technique guarantees finite blocking for programs with a finite number of threads and fair scheduling. We have implemented our technique in the automatic program verifier Chalice.

Speaker's bio:

Peter Müller has been Full Professor and head of the Chair of Programming Methodology at ETH Zurich since August 2008. His research focuses on languages, techniques, and tools for the development of correct software. His previous appointments include a position as Researcher at Microsoft Research in Redmond, an Assistant Professorship at ETH Zurich, and a position as Project Manager at Deutsche Bank in Frankfurt. Peter Müller received his PhD from the University of Hagen.



Type Refinements for Compiler Correctness
Robert Harper | Carnegie Mellon University

2013-08-22, 13:00 - 14:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Type refinements, introduced by Freeman and Pfenning and explored by Davies and Dunfield, unify the ontological and epistemic views of typing. Types tell us what programming language constructs exist, whereas refinements express properties of the values of a type.  Here we show that refinements are very useful in compiler correctness proofs, wherein it often arises that two expressions that are inequivalent in general are equivalent in the particular context in which they occur.  Refinements serve to restrict the contexts sufficiently so that the desired equivalence holds.  For example, an expression might be replaced by a more efficient one, even though it is not generally equivalent to the original, but is interchangeable in any context satisfying a specified refinement of the type of those expressions.

We study here in detail a particular problem of compiler correctness, namely the correctness of compiling polymorphism (generics) to dynamic typing by treating values of variable type as values of a universal dynamic type. Although this technique is widely used (for example, to compile Java generics), no proof of its correctness has been given to date. Surprisingly, standard arguments based on logical relations do not suffice, precisely because it is necessary to record deeper invariants about the compiled code than is expressible in its types alone. We show that refinements provide an elegant solution to this problem by capturing the required invariants, so that a critical invertibility property that is false in general can be proved to hold in the contexts that arise in the translated code. This proof not only establishes the correctness of this compilation method, but also exemplifies the importance of refinements for compiler correctness proofs more generally.
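The compilation scheme under discussion can be caricatured in a few lines (an illustration of mine of the tagging idea only, not the paper's formal translation): values of variable type are injected into a tagged universal form, and compiled code projects them back out.

```python
# Toy picture of compiling polymorphism via a universal dynamic type.
def tag(ty, v):
    return (ty, v)          # inject a typed value into dynamic form

def untag(ty, d):
    t, v = d
    # Projection can fail on arbitrary dynamic values -- this is the
    # invertibility property that is false in general...
    assert t == ty
    return v

# A generic function compiled against the dynamic type:
def compiled_id(d):
    return d

# ...but in translated code, every value reaching 'untag' was built
# by 'tag' at the same type, so projection after injection succeeds.
assert untag("int", compiled_id(tag("int", 7))) == 7
```

A refinement type, in this caricature, is what records that every dynamic value flowing through `compiled_id` is a well-formed tag at the expected type.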

Speaker's bio:

-



From Replication to Flexible and Portable Determinism for Java
Joerg Domaschka | Institute for Information Resource Management, Ulm University

2013-08-05, 11:30 - 13:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

Determinism of applications is required in the domains of testing, debugging, and fault-tolerant replication. The literature presents various ways to render application execution deterministic, in either a transparent or a non-transparent manner. Existing approaches operate at the layers of hardware, operating system, runtime environment, or even inside the application logic. Other systems operate in between two layers: hypervisors between hardware and operating system, and supervisors between operating systems and applications. In this talk, I focus on the combination of replication and determinism: applying replication techniques and replication protocols to business applications requires that the logic of the business application behave deterministically, while the other parts of the system, such as the replication logic, need not. This talk gives a brief introduction to the Java-based Virtual Nodes Replication Framework (VNF). VNF allows transparent replication of business logic. It enables late configuration of multiple aspects of a replica group, including the particular replication protocol. Deciding late which replication protocol to apply, however, is only possible when the application logic satisfies the constraints imposed by the replication protocol. Many replication protocols can only be applied when the application logic is guaranteed to behave deterministically. The Deterministic Java suite (Dj suite) is capable of rendering Java applications fully deterministic. In contrast to other approaches, it is pure Java and hence portable across JVMs, operating systems, and hardware platforms. Furthermore, it can be selectively applied to only parts of entire Java programs, which makes it well suited for replicated scenarios. In addition, the Dj suite comprises a deterministic version of the Java runtime libraries, which makes it the only comprehensive, Java-based approach in the literature.
The capability for flexible configuration ensures adaptability to many different use cases. In this talk, I give an overview of mechanisms and techniques for making Java applications deterministic. In addition, I sketch how to build a deterministic Java-only runtime system from an existing open-source Java implementation. The talk concludes with evaluation results, a discussion of future work, and an outlook on the integration of Dj with VNF as well as the possibility of virtualised Java platforms.

Speaker's bio:

Jörg Domaschka received a diploma in computer science from the University of Erlangen-Nuremberg, Germany, in 2005. From 2006 to 2012, he was a research assistant in Franz Hauck's research group at the Institute for Distributed Systems, University of Ulm, Germany. From 2012 to 2013 he worked as a business consultant. He received a doctoral degree from the University of Ulm in 2013; in May 2013 he returned to academia and joined Stefan Wesner's Institute for Information Resource Management at the University of Ulm as a senior researcher. In the past, Jörg's research was mainly concerned with distributed, fault-tolerant systems and the automatic provisioning of fault tolerance and determinism. His interests, however, also cover distributed algorithms, self-adaptability, scalability, and programming paradigms for such systems. He actively participated in the XtreemOS FP6 project, where he was the technical lead of the University of Ulm group and one of its key developers. He is now contributing to the PaaSage FP7 project and will soon start working on the FP7 project CACTOS.



Programming with algebraic effects and handlers in Eff
Andrej Bauer | University of Ljubljana

2013-08-02, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Eff is a programming language in which computational effects and handlers are first-class, so we can define new computational effects and handle existing ones in flexible ways. This allows us to express naturally not only the traditional computational effects, but also control operators, such as delimited continuations, I/O redirection, transactions, backtracking and other search strategies, and cooperative multi-threading.

In the talk I shall first introduce Eff from a practical point of view through examples, and then focus on a more precise treatment of a core Eff. I shall also present an effect system for Eff.
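Eff's handlers are first-class, so new effects can be defined and interpreted freely. As a rough analogue (Python generators standing in for Eff's constructs, with names of my own; this is not Eff syntax), a computation can perform "get"/"put" effects that a state handler interprets:

```python
# A computation that performs effects by yielding requests.
def counter_program():
    n = yield ("get", None)       # perform the 'get' effect
    yield ("put", n + 1)          # perform the 'put' effect
    m = yield ("get", None)
    return m * 10

# A handler: it owns the state and decides how each effect resumes.
def run_with_state(program, state):
    gen = program()
    resume = None
    while True:
        try:
            op, arg = gen.send(resume)
        except StopIteration as done:
            return done.value, state
        if op == "get":
            resume = state
        elif op == "put":
            state, resume = arg, None

result, final_state = run_with_state(counter_program, 4)
assert (result, final_state) == (50, 5)
```

Replacing `run_with_state` with a different handler reinterprets the same program, which is the flexibility (transactions, backtracking, redirection) the abstract describes.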

Speaker's bio:

-



Traffic Correlation on Tor by Realistic Adversaries
Aaron Johnson | US Naval Research Laboratory

2013-07-29, 13:00 - 14:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 113

Abstract:

We present the first analysis of the popular Tor anonymity network that indicates the security of typical users against reasonably realistic adversaries in the Tor network or in the underlying Internet. Our results show that Tor users are far more susceptible to compromise than indicated by prior work. Specific contributions include (1) a model of various typical kinds of users, (2) an adversary model that includes Tor network relays, autonomous systems (ASes), Internet exchange points (IXPs), and groups of IXPs drawn from empirical study, (3) metrics that indicate how secure users are over a period of time, (4) the most accurate topological model to date of ASes and IXPs as they relate to Tor usage and network configuration, (5) a novel realistic Tor path simulator (TorPS), and (6) analyses of security making use of all the above. To show that our approach is useful to explore alternatives and not just Tor as currently deployed, we also analyze a published alternative path selection algorithm, Congestion-Aware Tor. We create an empirical model of Tor congestion, identify novel attack vectors, and show that it too is more vulnerable than previously indicated.

Speaker's bio:

Aaron Johnson is a computer scientist at the U.S. Naval Research Laboratory. A general theme of his research is designing protocols to provide good, provable tradeoffs between privacy and utility. Specifically, he is working on private data publishing and anonymous communication protocols.



Complex networks approach to modeling online social systems
Przemyslaw Grabowicz | Institute for Cross-Disciplinary Physics and Complex Systems, Palma de Mallorca

2013-07-04, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

An increasing number of today's social interactions occur through online social media as communication channels. Some online social networks have become extremely popular in the last decade. They differ among themselves in the character of the service they provide to online users. For instance, Facebook can be seen mainly as a platform for keeping in touch with close friends and relatives, Twitter is used to propagate and receive news, LinkedIn facilitates the maintenance of professional contacts, Flickr gathers amateurs and professionals of photography, etc. Albeit different, all these online platforms share an ingredient that pervades all their applications. There exists an underlying social network that allows their users to keep in touch with each other and helps to engage them in common activities or interactions, leading to a better fulfillment of the service's purposes. This is the reason why these platforms share a good number of functionalities, e.g., broadcast status updates, personal communication channels, easy one-step information sharing, groups created and maintained by the users, organized user-generated content, etc. As a result, online social networks are an interesting field in which to study social behavior that seems to be generic across the different online services. Since at the bottom of these services lies a network of declared relations, and the basic interactions in these platforms tend to be pairwise, a natural methodology for studying these systems is provided by network science. In this presentation I describe some of the results of my studies about community structure, interaction dynamics and browsing patterns in online social networks. I present them in an interdisciplinary context of network science, sociology and computer science.

The presentation is divided into three main parts, here are the links to our publications related to each of the sections:

Part I: Interaction patterns in the context of social groups http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0029358

Part II: Social and topical groups http://dl.acm.org/citation.cfm?id=2433475

Part III: User browsing patterns and photo recommendation https://docs.google.com/file/d/0B0_e6k3kQKEubEZfZnQwckNjQXM/edit

Speaker's bio:

I am a PhD student at the Institute for Cross-Disciplinary Physics and Complex Systems in Palma de Mallorca.

The main topic of my research is social networks, and so far my work has concentrated on them. Although I hold a Master's degree in Physics, the research I perform with my collaborators is truly interdisciplinary, on the frontier between Computer Science, Sociology and Physics.

I have a feeling that my journey with social networks is just taking off. I had the pleasure of participating in the Truthy project at Indiana University, where we developed its real-time movies feature, which later won us the WICI Data Challenge. Recently I finished a half-year internship at Yahoo! Research Barcelona in the Social Media Engagement group.



Naiad: a system for iterative, incremental, and interactive distributed dataflow
Frank McSherry | Microsoft Research Silicon Valley

2013-06-24, 14:00 - 15:30
Kaiserslautern building G26, room 113 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In this talk I'll describe the Naiad system, based on a new model for low-latency incremental and iterative dataflow. Naiad is designed to provide three properties we do not think yet exist in a single system: the expressive power of loops, concurrent vertex execution, and fine-grained edge completion. Removing any one of these requirements yields an existing class of solutions (respectively: streaming systems like StreamInsight, iterative incremental systems like Nephele, and callback systems like Percolator), but all three together appear to require a new system design. We will describe Naiad's structured cyclic dataflow model and protocol for tracking and coordinating outstanding work, more closely resembling memory fences than traditional distributed systems barriers. We give several examples of how Naiad can be used to efficiently implement many of the currently popular "big data" programming patterns, as well as several new ones, and experimental results indicating that Naiad's relative performance ranges from "as good as" to "much better than" existing systems.

This is joint work with Derek G. Murray, Rebecca Isaacs, Michael Isard, Paul Barham, and Martin Abadi.

Speaker's bio:

Frank joined the MSR Silicon Valley lab in 2002, immediately after completing his graduate degree with Anna Karlin at the University of Washington. His interests are in privacy and large-scale data mining and analysis, and the theoretical and practical issues surrounding them. In particular, he helped to develop the recent definition of differential privacy, and has been designing and implementing the Privacy Integrated Queries data analysis platform, which provides these guarantees.



Dealing with Resource Allocations and Performance Interference in Virtualized Environments
Nedeljko Vasic | EPFL

2013-06-03, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Cloud computing in general, and Infrastructure-as-a-Service (IaaS) in particular, are becoming ever more popular. However, effective management of virtualized resources is a challenging task. Moreover, performance interference (and the resulting unpredictability in the delivered performance) across virtual machines co-located on the same physical machine threatens to make cloud computing inadequate for performance-sensitive customers and more expensive than necessary for all customers.

In this talk, I will describe two frameworks - DejaVu and DeepDive - for dealing with the resource management and performance interference issues in virtualized environments. The key idea behind DejaVu is to cache and reuse the results of previous resource allocation decisions at runtime. By doing so, it speeds up adaptation to workload changes by 18X relative to the state-of-the-art. DeepDive transparently diagnoses and manages performance interference in the cloud by leveraging easily-obtainable low level metrics to discern when interference is occurring and what resource is causing it. DeepDive also mitigates interference using a low-overhead approach to identifying a VM placement that alleviates interference.
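DejaVu's central idea, reusing the results of previous resource-allocation decisions, can be illustrated with a minimal sketch. The class, the workload signature, and the toy tuner below are illustrative assumptions, not DejaVu's actual implementation:

```python
# Hypothetical sketch of DejaVu's core idea: cache resource-allocation
# decisions keyed by a coarse workload signature, so a recurring workload
# skips the expensive tuning step. All names are illustrative.

def signature(metrics, bucket=100):
    """Coarsen raw workload metrics (e.g. req/s, avg request size)
    into a discrete signature so that similar workloads collide."""
    return tuple(m // bucket for m in metrics)

class AllocationCache:
    def __init__(self, tuner):
        self.tuner = tuner          # expensive profiling/tuning function
        self.cache = {}             # signature -> allocation decision

    def allocate(self, metrics):
        sig = signature(metrics)
        if sig not in self.cache:   # first time: pay the tuning cost
            self.cache[sig] = self.tuner(metrics)
        return self.cache[sig]      # repeat workloads answered instantly

# Toy tuner: size the VM to the request rate (stand-in for real profiling).
cache = AllocationCache(tuner=lambda m: {"vcpus": max(1, m[0] // 500)})
print(cache.allocate([1200, 64]))   # tuned: {'vcpus': 2}
print(cache.allocate([1250, 90]))   # same signature bucket -> served from cache
```

The speedup DejaVu reports comes precisely from the second case: a cache hit avoids re-profiling the workload.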

Speaker's bio:

Nedeljko Vasic received his Ph.D. from the School of Computer and Communication Sciences at EPFL, Switzerland, and his MSc degree in Computer Science and Automatics from the University of Novi Sad (FTN), Serbia. In 2010, he held a position at IBM Research, Zurich. Since May 2011, he has been working as a post-doctoral researcher at the Networked Systems Laboratory and the Operating Systems Laboratory, EPFL. His main interests are in: i) performance evaluation and resource management in virtualized environments, and ii) Internet and data center architectures that result in better performance, energy efficiency, and elasticity. Nedeljko is a recipient of: i) a prestigious award for the best graduating student of the University of Novi Sad (Republic of Serbia), ii) the Best Paper Award at COMSNETS 2009, iii) an IBM PhD Fellowship in 2010, and iv) an Honorable Mention in the 2012 EuroSys Roger Needham PhD Award competition for the best systems PhD in Europe.



Logical Abstractions of Systems
Thomas Wies | NYU

2013-05-16, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Verification technology has proved useful for increasing software reliability and productivity. Its great asset lies in concepts and algorithms for the mechanical construction of formally precise, conservative abstractions of behavioral aspects of systems. Its applicability extends to other problems in Computer Science. A recent example is our use of abstraction algorithms for automated debugging of programs. In my talk, I will present techniques for computing and analyzing abstractions of systems whose behavior depends on complex data and control, such as heap-allocated data structures and distributed message-passing concurrency. Our techniques are based on decision procedures for specific logical theories. The resulting logical abstractions give rise to a new generation of verification tools.

Speaker's bio:

Thomas Wies is an Assistant Professor in the NYU Computer Science Department and a member of the Analysis of Computer Systems Group. He received his doctorate in Computer Science from the University of Freiburg, Germany (2009). Before joining NYU, he held post-doctoral positions at École Polytechnique Fédérale de Lausanne, Switzerland and at the Institute of Science and Technology Austria.



Proof-relevant logical relations
Martin Hofmann | Ludwig-Maximilians-Universitaet Muenchen

2013-05-13, 14:00 - 15:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

We introduce a novel variant of logical relations that maps types not merely to partial equivalence relations on values, as is commonly done, but rather to a proof-relevant generalisation thereof, namely setoids.

A setoid is like a category all of whose morphisms are isomorphisms (a groupoid) with the exception that no equations between these morphisms are required to hold.

The objects of a setoid establish that values inhabit semantic types, whilst its morphisms are understood as evidence for semantic equivalence.

The transition to proof-relevance solves two well-known problems caused by the use of existential quantification over future worlds in traditional Kripke logical relations: failure of admissibility, and spurious functional dependencies.

We illustrate the novel format with two applications: a direct-style validation of Pitts and Stark's equivalences for "new" and a denotational semantics for a region-based effect system that supports type abstraction in the sense that only externally visible effects need to be tracked; non-observable internal modifications, such as the reorganisation of a search tree or lazy initialisation, can count as "pure" or "read only". This "fictional purity" allows clients of a module soundly to validate more effect-based program equivalences than would be possible with traditional effect systems.

This is joint work with Nick Benton and Vivek Nigam.

Speaker's bio:

-



Gender Swapping and User Behaviors in Online Social Games
Meeyoung Cha | KAIST

2013-05-10, 14:30 - 16:00
Saarbrücken building E1 5, room 005 / simultaneous videocast to Kaiserslautern building G26, room 112

Abstract:

Modern Massively Multiplayer Online Role-Playing Games (MMORPGs) provide lifelike virtual environments in which players can conduct a variety of activities, including combat, trade, and chat with other players. While the game world and the available actions therein are inspired by their offline counterparts, the games’ popularity and dedicated fan base are testament to the allure of the novel social interactions afforded by granting people an alternative life as new characters and personae. In this work we investigate the phenomenon of "gender swapping," which refers to players choosing avatars of genders opposite to their natural ones. We report the behavioral patterns observed in players of Fairyland Online, a globally serviced MMORPG, during social interactions when playing as in-game avatars of their own real gender or gender-swapped, and discuss the effect of gender role and self-image in virtual social situations, as well as the potential of our study for improving MMORPG quality and detecting fake online identities.

(To appear at WWW2013. Joint work with Jing-Kai Lou, Kunwoo Park, Juyong Park, Chin-Laung Lei, and Kuan-Ta Chen)

Speaker's bio:

Meeyoung Cha is an assistant professor at the Graduate School of Culture Technology at KAIST. Meeyoung received a Ph.D. degree in Computer Science from KAIST in 2008. Previously, she was a post-doctoral researcher at the Max Planck Institute for Software Systems in Germany. Her research interests are in the analysis of large-scale online social networks, with emphasis on the spread of information, moods, and user influence. She received the best paper award from the Usenix/ACM SIGCOMM Internet Measurement Conference 2007 for her work on YouTube. Her recent work on user influence in Twitter has been featured on the New York Times website and Harvard Business Review's research blog. Her research has been published in leading journals and conferences including PLoS One, Information Sciences, WWW, and ICWSM.



Understanding and Improving the Efficiency of Failure Resilience for Big Data Frameworks
Florin Dinu | Rice University

2013-04-23, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Big data processing frameworks (MapReduce, Hadoop, Dryad) are hugely popular today. A strong selling point is their ability to provide failure resilience guarantees: they can run computations to completion despite occasional failures in the system. However, an overlooked point has been the efficiency of the failure resilience provided. The vision of this work is that big data frameworks should not only finish computations under failures but also minimize the impact of the failures on the computation time.

The first part of the talk presents the first in-depth analysis of the efficiency of the failure resilience provided by the popular Hadoop framework at the level of a single job. The results show that compute node failures can lead to variable and unpredictable job running times. The causes behind these results are detailed in the talk. The second part of the talk focuses on providing failure resilience at the level of multi-job computations. It presents the design, implementation and evaluation of RCMP, a MapReduce system based on the fundamental insight that using replication as the main failure resilience strategy oftentimes leads to significant and unnecessary increases in computation running time. In contrast, RCMP is designed to use job re-computation as a first-order failure resilience strategy. RCMP enables re-computations that perform the minimum amount of work and also maximizes the efficiency of the re-computation work that still needs to be performed.

Speaker's bio:

Florin Dinu is a final year graduate student in the Systems Group at Rice University, Houston, TX. He is advised by Prof. T. S. Eugene Ng. Before joining Rice in 2007, he received a B.A. in Computer Science from Politehnica University Bucharest in 2006 and then worked as a junior researcher at the Fokus Fraunhofer Institute in Berlin, Germany. His Ph.D. dissertation focuses on the efficiency of failure resilience in big data processing frameworks. He has also done work on the benefits of centralized network control, congestion inference and improving data transfers for big data computations.



Mining Requirements from an Industrial-scale Control System
Jyotirmoy Deshmukh | Toyota Engineering

2013-04-22, 14:00 - 15:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Industrial-scale control systems are often developed in the model-based design paradigm. This typically involves capturing a plant model that describes the dynamical characteristics of the physical processes within the system, and a controller model, which is a block-diagram-based representation of the software used to regulate the plant behavior. In practice, plant models and controller models are highly complex as they can contain highly nonlinear hybrid dynamics, look-up tables storing pre-computed values, several levels of design-hierarchy, design-blocks that operate at different frequencies, and so on. Moreover, the biggest challenge is that system requirements are often imprecise, non-modular, evolving, or even simply unknown. As a result, formal techniques have been unable to digest the scale and format of industrial-scale control systems. On the other hand, the Simulink modeling language -- a popular language for describing such models -- is widely used as a high-fidelity simulation tool in the industry, and is routinely used by control designers to experimentally validate their controller designs. This raises the question: "What can we accomplish if all we have is a very complex Simulink model of a control system?" In this talk, we give an example of a simulation-guided formal technique that can help characterize temporal properties of the system, or guide the search for design behaviors that do not conform to "good behavior". Specifically, we present a way to algorithmically mine temporal assertions from a Simulink model. The input to our algorithm is a requirement template expressed in Parametric Signal Temporal Logic -- a formalism to express temporal formulas in which concrete signal or time values are replaced by parameters. 
Our algorithm is an instance of counterexample-guided inductive synthesis: an intermediate candidate requirement is synthesized from simulation traces of the system, which is refined using counterexamples to the candidate obtained with the help of a falsification tool. The algorithm terminates when no counterexample is found. Mining has many usage scenarios: mined requirements can be used to validate future modifications of the model, they can be used to enhance understanding of legacy models, and can also guide the process of bug-finding through simulations.
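The synthesize/falsify loop described above can be sketched on a toy instance. The template, the `simulate` stand-in, and all parameter choices below are illustrative assumptions, not the actual mining algorithm; the template mined is "always |y(t)| <= b":

```python
# Simplified, hypothetical sketch of counterexample-guided requirement
# mining. The synthesizer proposes the tightest bound b consistent with
# the traces seen so far; a toy falsifier searches for a violating trace.

def simulate(u):
    """Stand-in for a Simulink simulation: step response scaled by input u."""
    return [u * (1 - 0.5 ** k) for k in range(20)]

def synthesize(traces):
    """Tightest bound consistent with all traces seen so far."""
    return max(abs(y) for tr in traces for y in tr)

def falsify(b, inputs):
    """Search the input space for a trace exceeding the candidate bound."""
    for u in inputs:
        tr = simulate(u)
        if any(abs(y) > b for y in tr):
            return tr               # counterexample found
    return None                     # candidate holds on all tested inputs

inputs = [0.5, 1.0, 2.0, 4.0]
traces = [simulate(inputs[0])]      # seed with one simulation
while True:
    b = synthesize(traces)          # candidate requirement
    cex = falsify(b, inputs)
    if cex is None:
        break                       # no counterexample: requirement mined
    traces.append(cex)              # refine with the counterexample

print(f"mined requirement: always |y(t)| <= {b:.3f}")
```

Each iteration either terminates or strictly enlarges the trace set, mirroring the refinement loop in the abstract.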

Speaker's bio:

-



Liveness-Based Pointer Analysis
Uday Khedkar | IIT Bombay

2013-04-19, 15:00 - 16:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Precise flow- and context-sensitive pointer analysis (FCPA) is generally considered prohibitively expensive for large programs; most tools relax one or both of the requirements for scalability. We argue that precise FCPA has been judged over-harshly: the vast majority of points-to pairs calculated by existing algorithms are never used by any client analysis or transformation because they involve dead variables. We therefore formulate an FCPA in terms of a joint points-to and liveness analysis, which we call L-FCPA.

Our analysis computes points-to information only for live pointers and its propagation is sparse (restricted to live ranges of respective pointers). Further, our analysis uses strong liveness, effectively including dead code elimination. It calculates must-points-to information from may-points-to information afterwards instead of using a mutual fixed-point, and uses value-based termination of call strings during interprocedural analysis (which reduces the number of call strings significantly).

We implemented a naive L-FCPA in GCC-4.6.0 using linked lists. Evaluation on SPEC2006 showed significant increase in the precision of points-to pairs compared to GCC's analysis. Interestingly, our naive implementation turned out to be faster than GCC's analysis for all programs under 30kLoC. Further, L-FCPA showed that fewer than 4% of basic blocks had more than 8 points-to pairs. We conclude that the usable points-to information and the required context information is small and sparse and argue that approximations (e.g. weakening flow or context sensitivity) are not only undesirable but also unnecessary for performance.
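The core idea, restricting points-to facts to live pointers, can be illustrated on a toy straight-line program. The representation and helper names are purely illustrative; the real analysis is flow- and context-sensitive and interprocedural:

```python
# Toy illustration of liveness-restricted points-to analysis.
# Statements are address-taking assignments ("p = &a"); only p is read
# after the block. A naive analysis records three points-to pairs
# (p->a, q->b, p->c); restricting to live pointers keeps just one.

prog = [
    ("p", "a"),   # p = &a   (dead: p is redefined before any use)
    ("q", "b"),   # q = &b   (dead: q is never used)
    ("p", "c"),   # p = &c   (live: reaches the use at the end)
]
used_at_end = {"p"}   # only p is read after this block

# Backward liveness over straight-line code: a def is live iff its
# variable is used later and not redefined in between.
live_after = [set() for _ in prog]
live = set(used_at_end)
for i in range(len(prog) - 1, -1, -1):
    live_after[i] = set(live)
    var, _ = prog[i]
    live.discard(var)             # a def kills liveness above it

# Forward points-to pass, recording a pair only when the pointer is live.
points_to = {}
for i, (var, target) in enumerate(prog):
    if var in live_after[i]:      # dead defs produce no points-to pairs
        points_to[var] = {target}
    else:
        points_to.pop(var, None)

print(points_to)   # only the live fact survives: {'p': {'c'}}
```

Even in this three-statement example, two of the three points-to pairs a naive analysis would compute are never consulted by any client, which is the paper's argument writ small.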

Speaker's bio:

-



Synthesis and Control of Infinite-State Systems with Partial Observability
Rayna Dimitrova | UdS

2013-04-18, 11:00 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Synthesis methods automatically construct a system, or an individual component within a system, such that the result satisfies a given specification. The synthesis procedure must take into account the component's interface and deliver implementations that comply with its limitations. For example, what a component can observe about its environment may be restricted by imprecise sensors or inaccessible communication channels. In addition, sufficiently precise models of a component's environment are typically infinite-state, for example due to modeling real time or unbounded communication buffers. In this talk I will present synthesis methods for infinite-state systems with partial observability. First, I will describe a technique for automatic generation of observation predicates (clock constraints) for timed control with safety requirements. Finding the right observations is the main challenge in timed control with partial observability. Our approach follows the Counterexample-Guided Abstraction Refinement scheme, i.e., it uses counterexamples to guide the search. It employs novel refinement techniques based on interpolation and model generation. Our approach yields encouraging results, demonstrating better performance than brute-force enumeration of observation sets, in particular for systems where a fine granularity of the observations is necessary. Second, I will outline a synthesis algorithm for Lossy Channel Systems (LCSs) with partial observability and safety specifications. The algorithm uses an extension of the symbolic representation common for backward analysis of LCSs. Its termination relies on the fact that LCSs are better-quasi ordered systems.

Speaker's bio:

Rayna Dimitrova is a PhD candidate at Saarland University, working with Prof. Bernd Finkbeiner.



Securing information release: systems, models, and programming languages
Aslan Askarov | Harvard University

2013-04-08, 10:30 - 11:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Computer systems sometimes need to release some confidential information. However, they must also prevent inadvertent release of information that should remain confidential. These requirements significantly complicate reasoning about system security, and are not addressed by conventional security mechanisms. To provide assurance for such systems we need to develop principled approaches for specifying and enforcing secure information release. In this talk, I will describe how this can be achieved using systems and programming languages techniques.

The first part of the talk will focus on controlling inadvertent leaks in complex systems. I will discuss the leaks that happen when an adversary can measure the time at which a system performs an observable action, also known as timing channels. I will explain how timing channels present a serious threat in computer security, and introduce predictive mitigation---a general technique for mitigating timing channels that works by predicting timing from past behavior and public information. Rather than eliminating timing channels entirely, predictive mitigation bounds the amount of information that an adversary can learn via timing channels with a trade-off in system performance. Under reasonable assumptions, the bounds are logarithmic in the running time of the system.
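A minimal sketch of the doubling scheme behind predictive mitigation follows; the class and parameter names are illustrative, and the published mechanism is more general:

```python
# Hedged sketch of predictive timing-channel mitigation. Outputs are
# released only at predicted times; a misprediction delays the output and
# doubles the prediction, so over a run of length T the number of
# informative timing observations grows only logarithmically in T.

class PredictiveMitigator:
    def __init__(self, initial_prediction=1.0):
        self.prediction = initial_prediction
        self.mispredictions = 0

    def release_time(self, actual_finish):
        """Earliest predicted slot at or after the actual finish time."""
        while actual_finish > self.prediction:
            self.prediction *= 2          # doubling on each misprediction
            self.mispredictions += 1
        return self.prediction            # adversary observes only this

m = PredictiveMitigator()
for t in [0.4, 0.9, 3.2, 2.1, 7.5]:      # secret-dependent finish times
    print(m.release_time(t))              # 1.0, 1.0, 4.0, 4.0, 8.0
print("mispredictions:", m.mispredictions)  # 3
```

The adversary learns at most which power-of-two epoch a response fell into, which is how the leakage bound ends up logarithmic in the running time, at the cost of added latency after each misprediction.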

The second part of the talk will present insights into the formalization of practical security specifications for the intentional release of confidential information. I will introduce a programming language-based framework that provides a formal vocabulary for expressing such specifications. Example specifications include what information may be released, when a release may happen, and whether an adversary has any control over a release. These specifications are soundly enforceable using a variety of static and dynamic program analyses.

Speaker's bio:

Aslan Askarov is currently a postdoctoral fellow at Harvard University, and was previously a postdoctoral associate at Cornell University. He received a PhD from Chalmers University of Technology in Gothenburg, Sweden in 2009. Aslan's research interests include computer security, programming languages, and systems.



Verifying shared-variable concurrent programs
Daniel Kroening | Oxford

2013-04-04, 11:00 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

I will first outline a series of research results on verifying shared-variable concurrent programs, including techniques for predicate abstraction for such programs, and how to check the resulting concurrent Boolean programs. I will then elaborate on two recent results on supporting weak memory consistency (ESOP and CAV 2013, respectively).

Speaker's bio:

-



Diagnosing and Repairing Internet Performance Problems
David Choffnes | University of Washington

2013-03-21, 10:30 - 11:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

We increasingly depend on Internet connectivity and performance for services ranging from telephony and video streaming to home monitoring and remote health care. However, hardware failures, misconfigurations, and software bugs frequently cause outages and other performance problems that disrupt these services. Existing tools provide network operators with only limited visibility into these problems and few options to address them. The result is that debugging Internet problems is often a slow, manual process. In this talk, I discuss how we can improve Internet reliability by enabling better tools for detecting, isolating, and repairing network problems as they occur. First, I discuss a system for crowdsourcing network monitoring to end hosts. By leveraging the network view from applications running on a large number of hosts, we can efficiently detect network problems that impact end-to-end performance. Second, I describe a system for isolating the network responsible for a problem; we develop new tools that allow an ISP to identify the root cause even when portions of the Internet are unreachable. Third, I present an approach for automatically repairing isolated network problems, which allows an ISP to use existing routing protocols in novel ways to cause other networks to avoid problems, thus restoring normal connectivity.

Speaker's bio:

David Choffnes earned his Ph.D. in Computer Science from Northwestern University in 2010 and is currently a postdoctoral research associate at the University of Washington. His research interests are primarily in the areas of distributed systems and networking, with a recent focus on mobile systems. He has coauthored three textbooks in computer science and programming, and has been awarded the CRA/NSF Computing Innovation Fellowship as well as the Outstanding Dissertation Award from Northwestern University.



Holistic System Design for Deterministic Replay
Dongyoon Lee | University of Michigan

2013-03-12, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

With the advent of multiprocessor systems, it is now the role of the programmers to explicitly expose parallelism and take advantage of parallel computing resources. However, parallel programming is inherently complex as programmers have to reason about all possible thread interleavings. A deterministic replay system that records and reproduces the execution of parallel programs can serve as a foundation for building many useful tools (e.g., time-travel debugger, fault tolerance system, etc.) by overcoming the inherent non-determinism in multiprocessor systems. While it is well known how to replay uniprocessor systems, it is much harder to provide deterministic replay of shared memory multithreaded programs on multiprocessors because shared memory accesses add a high-frequency source of non-determinism. I introduce a new insight to deterministic replay that it is sufficient for many replay uses to guarantee only the same output and the final states between the recorded and replayed executions, and thus it is possible to support replay without logging precise shared-memory dependencies. I call this relaxed but sufficient replay guarantee "external determinism" and leverage this observation to build efficient multiprocessor replay systems. In this talk, I will introduce two replay systems: Respec and Chimera. Respec enables software-only deterministic replay at low overhead with operating system support. Chimera leverages static data-race analysis to build an efficient software-only replay solution.

Speaker's bio:

Dongyoon is currently a PhD candidate in the EECS department at the University of Michigan, Ann Arbor. He received the M.S. degree in computer science and engineering from the University of Michigan, Ann Arbor, in 2009 and the B.S. degree in electronic engineering from Seoul National University, Korea, in 2004. He has worked at the intersection of operating systems, computer architecture, and dynamic/static program analysis, with a focus on developing practical solutions to improve the programmability, reliability and security of parallel programs. He has been awarded a VMware 2012 graduate fellowship, the Best Paper Award at ASPLOS 2011, and the Grand Prize in an embedded software contest held in Korea.



Practical, Usable, and Secure Authentication and Authorization on the Web
Alexei Czeskis | University of Washington, Seattle

2013-03-07, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

User authentication is a critical part of many systems.  As strong cryptography has become widespread and vulnerabilities in systems become harder to find and exploit, attackers are turning toward user authentication as a potential avenue for compromising users.  Unfortunately, user authentication on the web has remained virtually unchanged since the invention of the Internet.  I will present three systems that attempt to strengthen user authentication, and its close cousin authorization, on the web while being practical for developers, usable for users, and secure against attackers.  First, I will discuss Origin Bound Certificates -- a mechanism for tweaking Transport Layer Security (TLS) that can then be used to significantly strengthen the authentication of HTTP requests by binding cookies (or other tokens) to a client certificate.  This renders stolen cookies unusable by attackers. Second, I will present PhoneAuth, a system for protecting password-based login by opportunistically providing cryptographic identity assertions from a user's mobile phone while maintaining a simple and usable authentication experience.  Third, I will describe ongoing research into how a class of web vulnerabilities called Cross-Site Request Forgeries (CSRFs) can be fundamentally prevented using Allowed Referrer Lists.  I'll discuss the next big challenges in user authentication and conclude with several examples of where authentication matters beyond the web.
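The channel-binding idea behind Origin Bound Certificates can be sketched as follows; this is a toy HMAC construction with hypothetical names, not the actual TLS-OBC protocol:

```python
# Illustrative sketch: a server binds a session cookie to the client's
# self-generated certificate by MACing the session id together with the
# certificate fingerprint. A stolen cookie then fails verification on a
# channel established with a different client certificate.
import hmac
import hashlib

SERVER_KEY = b"server-secret"       # hypothetical server-side secret

def issue_cookie(session_id, cert_fingerprint):
    tag = hmac.new(SERVER_KEY, session_id + cert_fingerprint,
                   hashlib.sha256).hexdigest()
    return session_id + b"." + tag.encode()

def verify_cookie(cookie, cert_fingerprint):
    session_id, tag = cookie.rsplit(b".", 1)
    expected = hmac.new(SERVER_KEY, session_id + cert_fingerprint,
                        hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

alice_cert = hashlib.sha256(b"alice-obc-pubkey").digest()
mallory_cert = hashlib.sha256(b"mallory-obc-pubkey").digest()

cookie = issue_cookie(b"session42", alice_cert)
print(verify_cookie(cookie, alice_cert))    # True: original channel
print(verify_cookie(cookie, mallory_cert))  # False: stolen cookie, other channel
```

The point of the construction is the second check: possession of the cookie alone is worthless without the private key behind the bound certificate.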

Speaker's bio:

Alexei Czeskis is a 5th year PhD student at the Security and Privacy Research Lab at the University of Washington Department of Computer Science and Engineering. His primary research is focused on authentication – one of the most important, yet challenging aspects of computer security. Alexei is interested in user authentication in highly adversarial settings (e.g., on the web), in feature-constrained environments (e.g., on a mobile phone), and in a variety of other situations such as under duress. He also explores authentication in a range of devices – from powerful desktop computers and mobile phones to resource-constrained embedded devices (e.g., RFIDs or automotive systems). Besides the technical nature of the systems, he is also interested in how the systems interact with users – where they work well together and where they break down – and how the security and privacy of these user-facing systems can be improved.



Selected Topics on Wireless Security and Localization
Kasper Bonne Rasmussen | University of California, Irvine

2013-03-04, 10:30 - 11:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

I will cover a couple of my recent contributions to secure localization and distance bounding. Distance bounding protocols have been proposed for many security critical applications as a means of getting an upper bound on the physical distance to a communication partner. I will show some practical examples of problems where distance bounding can provide a unique solution to problems which are otherwise difficult to solve. One such example is in the context of implantable medical devices.

One of the main obstacles to the wider deployment of distance bounding using electromagnetic (radio) waves is the lack of hardware platforms that implement and support these protocols. I will show the first prototype system demonstrating that radio distance bounding protocols can be implemented to meet the strict processing-time requirements that these protocols impose. Our system implements a radio that is able to receive, process and transmit signals in less than 1 ns.
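A back-of-the-envelope calculation shows why sub-nanosecond processing matters; the round-trip numbers below are illustrative:

```python
# In distance bounding, the verifier infers an upper bound on the
# prover's distance from the round-trip time of a challenge-response.
# Every nanosecond of unaccounted prover processing inflates the bound
# by c * 1 ns / 2, i.e. about 15 cm.

C = 299_792_458.0                    # speed of light, m/s

def distance_bound(rtt_s, processing_s):
    """Upper bound on prover distance from measured round-trip time."""
    return C * (rtt_s - processing_s) / 2

rtt = 70e-9                          # example: 70 ns measured round trip
print(distance_bound(rtt, 0))        # naive bound, ~10.49 m
print(distance_bound(rtt, 1e-9))     # 1 ns processing accounted: ~10.34 m
# A software-stack delay of 1 microsecond, if unaccounted, would inflate
# the bound by ~150 m, letting a distant attacker appear nearby.
print(C * 1e-6 / 2)                  # ~149.9 m of slack per microsecond
```

This is why the prototype's sub-nanosecond processing path is the enabling contribution: with slower hardware the distance bound is too loose to be meaningful.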

Finally I will present an area where I see great potential for future work. In both sensing and actuation applications there is a semantic gap between the electrical system and the physical world. In an adversarial setting this gap can be exploited to make a system believe that, e.g., a switch was activated when in fact it wasn't. There is a plethora of application domains that share this problem, from bio-medical sensors and implantable medical devices to factory control systems and security-critical infrastructures. Some of these challenges can be solved using a traditional cryptographic approach; others are highly interdisciplinary and will best be handled in collaboration with experts from other fields.

Speaker's bio:

Kasper Rasmussen received an MSc in Information Technology and Mathematics from the Technical University of Denmark in 2005. He got his Ph.D. from the Department of Computer Science at ETH Zurich in 2011. During his Ph.D. he worked on various security issues including secure time synchronization and secure localization with a particular focus on distance bounding. At the end of his Ph.D., Kasper Rasmussen received the ETH Medal for an outstanding dissertation, an award given to 8% of finishing Ph.D. students. Kasper Rasmussen is currently working as a postdoctoral researcher at University of California, Irvine. His research interests include system security and security of wireless networks; security of embedded and cyber-physical systems, including smart grid nodes and hand held devices; protocol design and applied cryptography.



Cloud Storage Consistency Explained Through Baseball
Doug Terry | Microsoft Research, Silicon Valley

2013-02-22, 13:00 - 14:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Some cloud storage services, like Windows Azure, replicate data while providing strong consistency to their clients, whereas others, like Amazon, have chosen eventual consistency in order to obtain better performance and availability. A broader class of consistency guarantees can, and perhaps should, be offered to clients that read shared data. During a baseball game, for example, different participants (the scorekeeper, umpire, sportswriter, and so on) benefit from six different consistency guarantees when reading the current score. Eventual consistency is insufficient for most of the participants, yet strong consistency is not needed either.
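Three of these read guarantees can be modelled in a few lines (a toy model of my own, not code from the talk): the replicated score is a log of (home, visitors) writes, a read returns some write from the log, and each guarantee constrains which writes are acceptable.

```python
# Toy model of read guarantees over a replicated baseball score.
import random

log = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]  # full write history

def strong_read(log):
    """Strong consistency: only the latest write is acceptable."""
    return log[-1]

def eventual_read(log):
    """Eventual consistency: any previously written value may be returned."""
    return random.choice(log)

def monotonic_read(log, last_seen_index):
    """Monotonic reads: never observe a write older than one already seen."""
    i = random.randrange(last_seen_index, len(log))
    return i, log[i]
```

An umpire deciding whether the game is over needs `strong_read`; a casual fan checking the score periodically is well served by `monotonic_read`, which is cheaper to provide.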

Speaker's bio:

Doug Terry is a Principal Researcher in the Microsoft Research Silicon Valley Lab. His main research interests are in the design and implementation of novel distributed systems. Prior to joining Microsoft, Doug was the founder and CTO of Cogenia and chief scientist of Xerox PARC's Computer Science Laboratory, where he helped pioneer the notion of ubiquitous computing and led a number of research projects on weakly consistent distributed systems. He has published papers on a variety of topics including epidemic algorithms, collaborative filtering, continuous queries, active documents, the Etherphone system, and the Bayou replicated database, and he wrote a Synthesis Lecture on "Replicated Data Management for Mobile Computing." Doug has a Ph.D. in Computer Science from U. C. Berkeley, where he worked on Berkeley UNIX and developed the first version of the BIND DNS server, and where he occasionally teaches courses. He earned a B.A. in Computer Science from UCSD. He is a member of the ACM Council and a Fellow of the ACM.



Tales from the Jungle
Peter Sewell | University of Cambridge

2013-02-18, 10:30 - 12:00
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We rely on a computational infrastructure that is a densely intertwined mass of software and hardware: programming languages, network protocols, operating systems, and processors. It has accumulated great complexity, from a combination of engineering design decisions, contingent historical choices, and sheer scale, yet it is defined at best by prose specifications, or, all too often, just by the common implementations. Can we do better? More specifically, can we apply rigorous methods to this mainstream infrastructure, taking the accumulated complexity seriously, and if we do, does it help? My colleagues and I have looked at these questions in several contexts: the TCP/IP network protocols with their Sockets API; programming language design, including the Java module system, the C11/C++11 concurrency model, and the C programming language; the hardware concurrency behaviour of x86, IBM POWER, and ARM multiprocessors; and compilation of concurrent code.

In this talk I will draw some lessons from what did and did not succeed, looking especially at the empirical nature of some of the work, at the social process of engagement with the various different communities, and at the mathematical and software tools we used. Domain-specific modelling languages (based on functional programming ideas) and proof assistants were invaluable for working with the large and loose specifications involved: idioms within HOL4 for TCP, our Ott tool for programming language specification, and Owens's Lem tool for portable semantic definitions, with HOL4, Isabelle, and Coq, for the relaxed-memory concurrency semantics work. Our experience with these suggests something of what is needed to make mathematically rigorous engineering of mainstream computer systems (and in systems research) a commonplace reality.

Speaker's bio:

-



Borders of Decidability in Verification of Data-Centric Dynamic Systems
Babak Bagheri | UNIBZ

2013-02-14, 14:00 - 15:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

In this talk I present our recent results on data-aware static verification, in which we select the artifact-centric model as a natural vehicle for our investigation. Artifact-centric systems are models for business process systems in which the dynamics and the manipulation of data are equally central. Given the family of variations on this model found in the literature, for the sake of a uniform terminology we introduce our own pristine formalization, which captures existing artifact-centric dialects and is semantically equivalent to the most expressive ones. We call our business process formalism "Data-Centric Dynamic Systems" (DCDS). The process is described in terms of atomic actions that evolve the system. Action execution may involve calls to external services, thus inserting fresh data into the system. As a result, such systems are infinite-state. We show that verification is undecidable in general, and we isolate notable cases where decidability is achieved. More specifically, we show that in a first-order $\mu$-calculus variant that preserves knowledge of objects that appeared along a run, we get decidability under the assumption that the fresh data introduced along a run are bounded, even though they might not be bounded in the overall system. Then, we investigate decidability under the assumption that knowledge of objects is preserved only while they are continuously present. We show that if infinitely many values occur in a run but do not accumulate in the same state, then we again obtain decidability. We give syntactic conditions to avoid this accumulation through the novel notion of "generate-recall acyclicity", which ensures that the new values generated by each service call activation cannot be accumulated indefinitely.
We believe that DCDSs are natural and expressive models for systems powered by an underlying database (i.e., so called data-centric systems), and thus are an ideal vehicle for foundational research with potential to transfer to alternative models.

Speaker's bio:

-



Abstraction for Weakly Consistent Systems
Alexey Gotsman | IMDEA Software Institute, Madrid, Spain

2013-02-14, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

When constructing complex concurrent and distributed systems, abstraction is vital: programmers should be able to reason about system components in terms of abstract specifications that hide the implementation details. Nowadays such components often provide only weak consistency guarantees about the data they manage: in shared-memory systems because of the effects of relaxed memory models, and in distributed systems because of the effects of replication. This makes existing notions of component abstraction inapplicable. In this talk I will describe our ongoing effort to specify consistency guarantees provided by modern shared-memory and distributed systems in a uniform framework and to propose notions of abstraction for components of such systems. I will illustrate our results using the examples of the C/C++ memory model and eventually consistent distributed systems. This is joint work with Mark Batty (University of Cambridge), Sebastian Burckhardt (Microsoft Research), Mike Dodds (University of York) and Hongseok Yang (University of Oxford).

Speaker's bio:

-



In Search of Truth (on the Deep Web)
Divesh Srivastava | AT&T Labs

2013-02-13, 10:30 - 12:00
Kaiserslautern building TU - 48, room 680 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

The Deep Web has enabled the availability of a huge amount of useful information and people have come to rely on it to fulfill their information needs in a variety of domains. We present a recent study on the accuracy of data and the quality of Deep Web sources in two domains where quality is important to people's lives: Stock and Flight. We observe that, even in these domains, the quality of the data is less than ideal, with sources providing conflicting, out-of-date and incomplete data. Sources also copy, reformat and modify data from other sources, making it difficult to discover the truth. We describe techniques proposed in the literature to solve these problems, evaluate their strengths on our data, and identify directions for future work in this area.

Speaker's bio:

Divesh Srivastava is the head of the Database Research Department at AT&T Labs-Research. He received his Ph.D. from the University of Wisconsin, Madison, and his B.Tech from the Indian Institute of Technology, Bombay. He is a Fellow of the ACM, on the board of trustees of the VLDB Endowment, and an associate editor of the ACM Transactions on Database Systems. He has served as the associate Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering, and the program committee co-chair of many conferences, including VLDB 2007. He has presented keynote talks at several conferences, including VLDB 2010. His research interests span a variety of topics in data management.



Towards a Secure DNS
Haya Shulman | Bar Ilan University, Israel

2013-02-12, 13:00 - 14:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Most caching DNS resolvers still rely, for their security against poisoning, on validating that DNS responses contain some ‘unpredictable’ values copied from the request. These values include the 16-bit identifier field and other fields randomised and validated by different ‘patches’ to DNS. We investigate the prominent patches and show how off-path attackers can circumvent all of them, exposing the resolvers to cache poisoning attacks. We present countermeasures preventing our attacks; however, we believe that our attacks provide additional motivation for the adoption of DNSSEC (or other MitM-secure defenses). We then investigate vulnerabilities in DNSSEC configuration among resolvers and zones, which reduce or even nullify the protection offered by DNSSEC. Finally, we provide recommendations and countermeasures to prevent these vulnerabilities.
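A back-of-the-envelope calculation (my illustration, not from the talk) shows why the 16-bit transaction ID alone is weak protection: an off-path attacker who triggers a query and floods forged responses succeeds once any guess matches, while each additional randomised field multiplies the entropy the attacker must guess.

```python
# Probability that at least one of n uniformly random guesses hits a
# secret drawn from a space of 2**entropy_bits values.
def spoof_success_prob(forged_responses: int, entropy_bits: int) -> float:
    space = 2 ** entropy_bits
    return 1.0 - (1.0 - 1.0 / space) ** forged_responses

# 10,000 forged responses per query window (an assumed attack rate):
p_txid_only = spoof_success_prob(10_000, 16)       # TXID only: ~14%
p_with_port = spoof_success_prob(10_000, 16 + 16)  # TXID + random source port
```

With the transaction ID alone, a single burst already succeeds with noticeable probability; adding 16 bits of source-port randomisation drops it below one in a hundred thousand, which is why defeating the port-randomisation patch, as the off-path attacks above do, matters so much.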

Speaker's bio:

-



Networking: A Killer App for Programming Languages Researchers
David Walker | Princeton University

2013-01-28, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Modern computer networks perform a bewildering array of tasks, from routing and traffic monitoring, to access control and server load balancing. Moreover, historically, managing these networks has been hideously complicated and error-prone, due to a heterogeneous mix of devices (e.g., routers, switches, firewalls, and middleboxes) and their ad hoc, closed and proprietary configuration interfaces. Software-Defined Networking (SDN) is poised to change this state of affairs by offering a clean, simple and open interface between network devices and the software that controls them. In particular, many commercial switches now support the OpenFlow protocol, and a number of campus, data-center, and backbone networks have deployed the new technology.

However, while SDN makes it possible to program the network, it does not make it easy: The first generation of SDN controllers offered application developers the "assembly language" of network programming platforms. To reach SDN’s full potential, research in programming languages and compilers is desperately needed. In this talk, I discuss our work to date in this area, which revolves around the design of a language, compiler and run-time system for SDN programming. The language, called Frenetic, allows programmers to work declaratively, specifying the behavior of a network at a high level of abstraction. The compiler and run-time system take care of the tedious details of compiling and implementing these high-level policies using the OpenFlow protocol.

A key strength of the Frenetic design is its support for modular programming: Complex network applications can be decomposed into logical subcomponents --- an access control policy, a load balancer, a traffic monitor --- and coded independently. Frenetic's rich combinator library makes it possible to stitch such components back together to form a fully functioning whole. Frenetic also contains carefully designed operators that help users transition from one global, high-level network policy to the next while preserving key network invariants. Overall, Frenetic's abstractions make it dramatically easier for programmers to write and reason about SDN applications.
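The modular-composition idea can be sketched in miniature (this is my illustration, not Frenetic's actual API): model a policy as a function from a packet to a set of actions, and stitch independent modules together with a parallel-composition combinator that unions their behavior.

```python
# Each module sees every packet and contributes its own actions.
def monitor(pkt):
    """Traffic monitor: count web traffic, forward nothing itself."""
    return {"count"} if pkt.get("dst_port") == 80 else set()

def firewall(pkt):
    """Access control: flag telnet for dropping."""
    return {"drop"} if pkt.get("dst_port") == 23 else set()

def route(pkt):
    """Routing: forward toward the destination."""
    return {f"fwd({pkt['dst']})"}

def parallel(*policies):
    """Parallel composition: union the actions of all modules."""
    return lambda pkt: set().union(*(p(pkt) for p in policies))

app = parallel(monitor, firewall, route)
actions = app({"dst": "10.0.0.2", "dst_port": 80})
```

Note that for a telnet packet this naive union yields both "drop" and a forwarding action; resolving such conflicts, and compiling the composed policy down to OpenFlow rules, is exactly the tedious work the abstract says the compiler and run-time system take care of.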

Speaker's bio:

-



Scientific Data Management: Not your everyday transaction
Prof. Anastasia Ailamaki | EPFL

2013-01-23, 14:00 - 15:30
Saarbrücken building E1 5, room 002 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Today's scientific processes heavily depend on fast and accurate analysis of experimental data. Scientists are routinely overwhelmed by the effort needed to manage the volumes of data produced either by observing phenomena or by sophisticated simulations. As database systems have proven inefficient, inadequate, or insufficient to meet the needs of scientific applications, the scientific community typically uses special-purpose legacy software. When compared to a general-purpose data management system, however, application-specific systems require more resources to maintain, and in order to achieve acceptable performance they often sacrifice data independence and hinder the reuse of knowledge. With the exponential growth of dataset sizes, data management technologies are no longer a luxury; they are the sole solution for scientific applications.

I will discuss some of the work from teams around the world and the requirements of their applications, as well as how these translate to challenges for the data management community. As an example I will describe a challenging application on brain simulation data, and its needs; I will then present how we were able to simulate a meaningful percentage of the human brain as well as access arbitrary brain regions fast, independently of increasing data size or density. Finally I will present some of the data management challenges that lie ahead in domain sciences, and will introduce NoDB, a new query processor which explores raw, never-before-seen data in-situ, using full querying power.

Speaker's bio:

-



Building Privacy-Preserving Systems: What Works, What Doesn't, and What Is To Be Done
Vitaly Shmatikov | University of Texas, Austin

2013-01-17, 10:00 - 11:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Every time you touch a computer, you leave a trace. Sensitive information about you can be found in the remnants of visited websites and voice-over-IP conversations on your machine; in the allegedly "de-identified" data about your purchases, preferences, and social relationships collected by advertisers and marketers; and, in the not-too-distant future, in the video and audio feeds gathered by sensor-based applications on mobile phones, gaming devices, and household robots.

This talk will describe several research systems developed at UT Austin that aim to provide precise privacy guarantees for individual users. These include (1) the Airavat system for differentially private data analysis, (2) the "Eternal Sunshine of the Spotless Machine" system that runs full-system applications in private sessions and then erases all memories of their execution from the host, and (3) "A Scanner Darkly" system that adds a privacy protection layer to popular vision and image processing libraries, preserving the functionality of sensor-based applications but preventing them from collecting raw images of their users.

Speaker's bio:

Vitaly Shmatikov is a faculty member at the University of Texas at Austin, where he works on computer security and privacy. After getting his PhD from Stanford and before joining UT, he worked at SRI on formal methods for analyzing security protocols.



SecGuru - Symbolic Analysis of Network Connectivity Restrictions
Nikolaj Bjorner | Microsoft Research Redmond

2013-01-09, 11:00 - 12:00
Saarbrücken building E1 5, room 002

Abstract:

SecGuru is a tool for automatically validating network connectivity restriction policies in large-scale data centers. The problem solved by SecGuru is acute in data centers offering public cloud services, where multiple tenants are hosted in customized isolation boundaries. Our tool supports the following interactions: (1) given a policy and a contract, verify that the policy satisfies the contract, and (2) provide a semantic difference between two policies. The former facilitates property checking and the latter facilitates identifying configuration drift. We identify bit-vector logic as a suitable basis for policy analysis and use the Z3 theorem prover to solve these constraints. We furthermore develop algorithms for compact enumeration of differences for bit-vector constraints. Finally, we evaluate SecGuru on large-scale production services, where it has been used to identify and fix numerous configuration errors.
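Both checks can be illustrated without an SMT solver (a deliberate simplification of mine; SecGuru itself encodes policies as Z3 bit-vector constraints rather than enumerating the space): treat a policy as a predicate over a small toy port space, then ask whether the deployed policy is contained in the contract and what their semantic difference is.

```python
# Toy policies over a 1024-port space; real policies range over full
# 32-bit addresses and ports, which is why bit-vector solving is needed.
PORTS = range(0, 1024)

def deployed_acl(p):   # deployed policy: allow web and ssh
    return p in (22, 80, 443)

def contract(p):       # contract: allow web only
    return p in (80, 443)

allowed_deployed = {p for p in PORTS if deployed_acl(p)}
allowed_contract = {p for p in PORTS if contract(p)}

# Check (1): does the policy satisfy the contract (allow nothing extra)?
satisfies_contract = allowed_deployed <= allowed_contract

# Check (2): semantic difference between the two policies.
drift = allowed_deployed ^ allowed_contract
```

Here the verification fails and the difference pinpoints the culprit: port 22 is allowed by the deployed ACL but not by the contract, exactly the kind of configuration drift the tool reports.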

Speaker's bio:

Nikolaj Bjorner is a Senior Researcher at Microsoft Research working in the area of Automated Theorem Proving and Software Engineering. His current main line of work is around the state-of-the-art theorem prover Z3, which is used as a foundation of many software engineering tools, including test-case generation, smart fuzzing, static analysis, program verification, software model checking, and model-based software design and synthesis. Previously, he designed DFSR (Distributed File System - Replication), shipped with Windows Server since 2005; before that he worked on distributed file sharing systems at XDegrees (a startup acquired by Microsoft), and on program synthesis and transformation systems at the research company Kestrel Institute. Nikolaj received his Master's and Ph.D. degrees in computer science from Stanford University. http://research.microsoft.com/en-us/people/nbjorner/



Reasoning as a First-class Operating System Service
Timothy Roscoe | ETH Zürich

2012-12-07, 11:00 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

In this talk, I'll argue for sophisticated automated reasoning capabilities as a first-class OS service. With such a service, one can delegate many OS policy decisions and calculations to a component which is highly flexible, expressive, and dynamic, providing considerable advantages over hard-coding such functionality in C or scripts. Modern operating systems face several engineering challenges: hardware is increasingly complex, increasingly diverse, and evolving rapidly. This, combined with parallel workloads that have complex performance interactions with hardware, makes it hard to build a simple OS kernel which delivers good performance for a variety of platforms and workloads.

As a first step, we decided to tackle this head-on by building a reasoning engine as a first-class service (the "System Knowledge Base") in the Barrelfish OS, borrowing ideas from fields such as knowledge representation, constraint satisfaction, logic programming, and optimization. Doing so was not without problems, but we found it highly convenient in a number of widely different application areas - for example PCI programming, process coordination, spatial scheduling, and message routing. I'll discuss several of these, and how the structure of the OS as a whole changes when a facility like the SKB is available. Finally, I'll talk about some future directions, particularly with embedded devices such as SoCs.

Speaker's bio:

Timothy Roscoe is a Professor in the Systems Group of the Computer Science Department at ETH Zurich, the Swiss Federal Institute of Technology. He received a PhD from the Computer Laboratory of the University of Cambridge, where he was a principal designer and builder of the Nemesis operating system, as well as working on the Wanda microkernel and Pandora multimedia system. After three years building web-based collaboration systems at a startup company in North Carolina, Mothy joined Sprint's Advanced Technology Lab in Burlingame, California, working on application hosting platforms and networking monitoring. Mothy joined Intel Research at Berkeley in April 2002 as a principal architect of PlanetLab, an open, shared platform for developing and deploying planetary-scale services. In September 2006 he spent four months as a visiting researcher in the Embedded and Real-Time Operating Systems group at National ICT Australia in Sydney, before joining ETH Zurich in January 2007. His current research interests include operating systems for heterogeneous multicore systems, and network architectures for ubiquitous computing.



Expositor: Scriptable Time-Travel Debugging with First Class Traces
Prof. Michael Hicks | University of Maryland

2012-11-30, 10:30 - 12:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We present Expositor, a new debugging environment that combines scripting and time-travel debugging to allow developers to automate complex debugging tasks. The fundamental abstraction provided by Expositor is the execution trace, which is a time-indexed sequence of program state snapshots. Developers can manipulate traces as if they were simple lists with operations such as map and filter. Under the hood, Expositor efficiently implements traces as lazy, sparse interval trees, whose contents are materialized on demand. Expositor also provides a novel data structure, the edit hash array mapped trie, which is a lazy implementation of sets, maps, multisets, and multimaps that enables developers to maximize the efficiency of their debugging scripts. We have used Expositor to debug a stack overflow and to unravel a subtle data race in Firefox. We believe that Expositor represents an important step forward in improving the technology for diagnosing complex, hard-to-understand bugs. This is joint work with Yit Phang Khoo and Jeff Foster, both at Maryland.
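The trace-as-a-list abstraction can be sketched with Python generators (my illustration; Expositor itself materializes snapshots lazily from a time-travel debugger, not from a live run): snapshots are produced only on demand, so filtering a long execution does not force every state.

```python
import itertools

def execution_trace():
    """Pretend program execution: yields (time, stack_depth) snapshots
    lazily, one at a time, only when the consumer asks for them."""
    for t in itertools.count():
        yield (t, t % 7)  # stack depth oscillates; computed on demand

# A "filter" over the trace: find the first moment the stack depth
# exceeds 5, without materializing the rest of the (infinite) execution.
deep = (snap for snap in execution_trace() if snap[1] > 5)
first_deep = next(deep)
```

This is the spirit of scripting a stack-overflow hunt: the query reads like a list operation, while laziness keeps the cost proportional to how far the script actually looks.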

Speaker's bio:

Michael Hicks is an associate professor in the Computer Science Department and UMIACS at the University of Maryland, College Park. His primary research interest is to develop and evaluate techniques to improve software reliability and security. Michael is the Director of the Maryland Cybersecurity Center (MC2), and with Jeff Foster he directs PLUM, the lab for Programming Languages research at the University of Maryland. Michael received his Ph.D. in Computer and Information Science from the University of Pennsylvania in August 2001, and he spent one year as a post-doctoral associate affiliated with the Information Assurance Institute of the Computer Science Department at Cornell University. During the 2008-2009 academic year, he was on sabbatical in Cambridge, England. From September to November he was at Microsoft Research, and from December to August 2009 he was at the University of Cambridge Computer Laboratory.



Beluga^mu: Programming proofs in context
Brigitte Pientka | McGill University

2012-11-14, 10:00 - 11:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Software systems should be robust, reliable, and predictable. Today, we routinely reason about the runtime behavior of software using formal systems such as type systems or logics for access control or information flow to establish safety and liveness properties.

In this talk, I will survey Beluga, a dependently typed programming and proof environment. It supports specifying formal systems in the logical framework LF and directly supports common and tricky routines dealing with variables, such as capture-avoiding substitution and renaming. Moreover, Beluga allows embedding LF objects together with their context in programs and types, supporting inductive and coinductive definitions. I will discuss how to write inductive and coinductive proofs about LF specifications using three different examples: type uniqueness, normalization by evaluation, and the fact that evaluating a lambda-term cannot yield a value and diverge at the same time. Taken together, these examples demonstrate the elegance and conciseness of Beluga for specifying, verifying and validating safety properties.

Speaker's bio:

Brigitte Pientka is an Associate Professor in the School of Computer Science at McGill University, where she leads the Computation and Logic group. She received her PhD from Carnegie Mellon University in 2003, and previously studied at the University of Edinburgh and the Technical University of Darmstadt.



Corybantic: Towards the Modular Composition of SDN Controllers
Jeffrey Mogul | HP Labs, Palo Alto

2012-11-05, 14:00 - 15:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Software-Defined Networking (SDN) promises to enable vigorous innovation, through separation of the control plane from the data plane, and to enable novel forms of network management, through a controller that uses a global view to make globally-valid decisions. The design of SDN controllers creates novel challenges; much previous work has focussed on making them scalable, reliable, and efficient.

We argue that, to control a realistic network, we do not want one monolithic SDN controller. Instead, we want to compose the effects of many controller modules managing different aspects of the network, which may be competing for resources. Each module will try to optimize one or more objectives; we address the challenge of how to coordinate between these modules to optimize an overall objective function. Our framework design, Corybantic, focusses on achieving both modular decomposition and maximizing the overall value delivered by the controller's decisions.

Speaker's bio:

Jeff Mogul is a Fellow at HP Labs, doing research primarily on computer networks and operating systems issues for enterprise and cloud computer systems; previously, he worked at the DEC/Compaq Western Research Lab. He received his PhD from Stanford in 1979, and is an ACM Fellow. Jeff is the author or co-author of several Internet Standards; he contributed extensively to the HTTP/1.1 specification. He was an associate editor of Internetworking: Research and Experience, and has been the chair or co-chair of a variety of conferences and workshops, including SIGCOMM, OSDI, and ANCS. He is currently co-chairing NSDI 2013.



Reasoning with MAD distributed systems
Lorenzo Alvisi | University of Texas at Austin

2012-10-31, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Decentralized approaches spanning multiple administrative domains (MAD) are an increasingly effective way to deploy services. Popular examples include peer-to-peer (p2p) applications, content distribution networks, and mesh routing protocols. Cooperation lies at the heart of these services. Yet, when working together is crucial, a natural question is: "What if users stop cooperating?" After all, users in a MAD system are vulnerable not only to the failure of some of the equipment in the system, or to the possibility that some users may behave maliciously, but also to the possibility that users may selfishly refrain from sharing their resources as the protocol would require. In this setting, it is hard to put a bound on the number of components in the systems that deviate from their correct specification. Is it still possible under such circumstances to build systems that not only provide provable guarantees in terms of their safety and liveness properties but also yield practical performance?

Speaker's bio:

Lorenzo Alvisi is a Professor in the Department of Computer Sciences at the University of Texas at Austin and a Visiting Chair Professor at Shanghai Jiao Tong University. Lorenzo holds a Ph.D. (1996) and M.S. (1994) in Computer Science from Cornell University, and a Laurea summa cum laude in Physics from the University of Bologna, Italy. His research interests are in dependable distributed computing. He is a Fellow of the ACM and the recipient of an Alfred P. Sloan Fellowship and an NSF CAREER Award, as well as of several teaching awards. He serves on the editorial boards of the ACM Transactions on Computer Systems (TOCS), ACM Computing Surveys, and Springer's Distributed Computing. In addition to distributed systems, Lorenzo is passionate about western classical music and red Italian motorcycles.



Automated Malware Analysis
Dr. Christopher Kruegel | University of California, Santa Barbara

2012-10-01, 11:00 - 12:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Malicious software (malware) is an important threat and root cause of many security problems on the Internet. In this talk, I will discuss our recent efforts on malware analysis, detection, and mitigation. First, I will introduce our infrastructure to collect and analyze malicious code samples. Then, I will present techniques to improve the quality of the results produced by automated, dynamic malware analysis systems. Finally, I will discuss ways in which these results can be leveraged for the detection and mitigation of malicious code.

Speaker's bio:

Christopher Kruegel is an Associate Professor and the holder of the Eugene Aas Chair in the Computer Science Department at the University of California, Santa Barbara. His research interests are computer and communications security, with an emphasis on malware analysis and detection, web security, and intrusion detection. Christopher enjoys building systems and making security tools available to the public. He has published more than 90 conference and journal papers. Christopher is a recent recipient of the NSF CAREER Award, the MIT Technology Review TR35 Award for young innovators, an IBM Faculty Award, and several best paper awards.



Planet Dynamic or: How I Learned to Stop Worrying and Love Reflection
Prof. Jan Vitek | Purdue University

2012-09-25, 15:00 - 16:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

A fundamental belief underlying forty years of programming languages research, aptly captured by the slogan "Well-typed programs can't go wrong", is that programs augmented with machine-checked annotations are more likely to be free of bugs. But of course, real programs do go wrong, and programmers are voting with their feet. Dynamic languages such as Ruby, Python, Lua, JavaScript and R are unencumbered by redundant type annotations and are increasingly popular. JavaScript, the lingua franca of the web, is moving to the server with the success of Node.js. R, another dynamic language, is being used in statistics, biology and finance for data analysis and visualization. Not only are these languages devoid of types, but they utterly lack any static structure that could be used for program verification. This talk will draw examples from recent results on JavaScript and R to illustrate the extent of the problem and propose some directions for research.

Speaker's bio:

-



Towards Trustworthy Embedded Systems
Gernot Heiser | NICTA, Kensington, Australia

2012-09-24, 13:00 - 14:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 029

Abstract:

Embedded systems are increasingly used in circumstances where people's lives or valuable assets are at stake, hence they should be trustworthy - safe, secure, reliable. True trustworthiness can only be achieved through mathematical proof of the relevant properties. Yet, real-world software systems are far too complex to make their formal verification tractable in the foreseeable future. The Trustworthy Systems project at NICTA has formally proved the functional correctness as well as other security-relevant properties of the seL4 microkernel. This talk will provide an overview of the principles underlying seL4, and the approach taken in its design, implementation and formal verification. It will also discuss on-going activities and our strategy for achieving the ultimate goal of system-wide security guarantees.

Speaker's bio:

-



Opportunistic Mobile Social Networks at Work
Anna-Kaisa Pietilainen | Technicolor Paris Research Laboratory

2012-07-06, 13:30 - 14:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Opportunistic networks exploit human mobility and consequent device-to-device ad hoc contacts to disseminate content in a "store-carry-forward" fashion. In opportunistic networks, disconnections and highly variable delays caused by human mobility are the norm. Another major challenge in opportunistic communications arises from the small form factor of mobile devices which introduces resource limitations compared to static computing systems. Lastly, human mobility and social interactions have a large impact on the structure and performance of opportunistic networks, hence, understanding these phenomena is crucial for the design of efficient algorithms and applications.
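The "store-carry-forward" principle can be made concrete with a minimal simulation. The sketch below is an illustrative reduction of my own (plain epidemic flooding over a contact trace, with invented node names), not MobiClique's actual social-aware forwarding logic:

```python
def epidemic_dissemination(contacts, source):
    """Epidemic store-carry-forward: every device that holds the message
    hands a copy to any device it meets that does not yet have it.

    contacts: chronologically ordered (time, node_a, node_b) contact events.
    Returns {node: time_first_received}; the source holds it at time 0.
    """
    received = {source: 0}
    for t, a, b in contacts:
        if a in received and b not in received:
            received[b] = t  # a carries the message and forwards it to b
        elif b in received and a not in received:
            received[a] = t
    return received
```

Even this toy model exhibits the phenomena the abstract studies: delivery delay is governed entirely by who meets whom and when, and nodes outside the right temporal communities may never be reached at all.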

In this work, we take an experimental approach to better understand opportunistic mobile social networks. We design and implement MobiClique, a communication middleware for opportunistic mobile social networking. MobiClique takes advantage of user mobility and social relationships to forward messages in an opportunistic manner. We perform a large-scale MobiClique experiment with 76 people, where we collect social network information (i.e. their Facebook profiles), and ad hoc contact and communication traces. We use the collected data with three other data sets to analyse in detail the time-varying structure and epidemic content dissemination in opportunistic networks. Most of the related work has focused on the pairwise contact history among users in conference or campus environments. We claim that given the density of these networks, this approach leads to a biased understanding of the content dissemination process. We design a methodology to break the contact traces down into "temporal communities", i.e., groups of people who meet periodically during an experiment. We show that these communities correlate with people's social communities. As in previous work, we observe that efficient content dissemination is mostly due to high contact rate users. However, we show that high contact rate users that are more frequently involved in temporal communities contribute less to the dissemination process, leading us to conjecture that social communities tend to limit the efficiency of content dissemination in opportunistic mobile social networks.

Speaker's bio:

-



Provably-Secure Cryptographic Protocols: from Practice to Theory to Practice
Dario Fiore | New York University

2012-07-05, 11:00 - 12:00
Saarbrücken building E1 5, room 029

Abstract:

Digital signatures can be seen as the digital equivalent of handwritten signatures, and are considered one of the most important cryptographic primitives. At a high level, they allow a user Alice to authenticate a digital document by generating a piece of information, the signature, using a secret key which is known only to her. Any other user who gets a matching public verification key can check the validity of such a signature and thus be convinced that it was generated by Alice. Digital signatures are required to satisfy the most natural security property one could expect: no one except those who know the secret key should be able to generate valid signatures. In the quest to mimic in the digital world what we are used to doing in the real world, an interesting question naturally arises: can Alice delegate the signing process (on a restricted set of messages) to third parties without having to reveal her secret key to them? A positive answer to this question has recently been given by Boneh et al. (PKC 2009) by means of homomorphic signatures.

In the first part of my talk, I will present the notion of homomorphic signatures: I will describe important applications which motivate the study of this primitive, and I will survey recent results of mine that propose efficient constructions.

In the second part of the talk I will move the focus to a related, but more general and intriguing question: can we sign computation? Are there means to certify that a program has been run correctly and/or on the correct inputs? These and similar questions are nowadays arising in the context of cloud computing applications in which users want to delegate computation and memory to third parties that are called cloud providers. I will describe relevant security issues emerging from these applications, and will discuss how cryptography can help to solve such problems.

During my presentation I will also mention the usual approaches underlying the process of designing cryptographic protocols, with a particular emphasis on how theory and practice can interact in a significant way.

The talk is mainly based on the following works, joint with Dario Catalano, Rosario Gennaro and Bogdan Warinschi:
- D. Catalano, D. Fiore and B. Warinschi. Adaptive Pseudo-Free Groups and Applications. EUROCRYPT 2011.
- D. Catalano, D. Fiore and B. Warinschi. Efficient Network Coding Signatures in the Standard Model. PKC 2012.
- D. Fiore and R. Gennaro. Publicly Verifiable Delegation of Large Polynomials and Matrix Computations, with Applications. Pre-print (May 2012): http://eprint.iacr.org/2012/281

Speaker's bio:

As of 2012, I’m a postdoctoral researcher in the Cryptography Group at the Courant Institute of Mathematical Sciences of New York University. Before joining NYU I was a postdoc in the Crypto Team at the École Normale Supérieure in Paris. I did my graduate studies at the University of Catania, where I earned my Ph.D. in Computer Science in March 2010. My advisor was Dario Catalano. I also spent part of my PhD visiting Yevgeniy Dodis at NYU and Rosario Gennaro at IBM Research.



Supporting Nested Locking in Multiprocessor Real-Time Systems
Bryan Ward | University of North Carolina, Chapel Hill

2012-07-03, 11:00 - 12:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

This paper presents the first real-time multiprocessor locking protocol that supports fine-grained nested resource requests. This locking protocol relies on a novel technique for ordering the satisfaction of resource requests to ensure a bounded duration of priority inversions for nested requests. This technique can be applied on partitioned, clustered, and globally scheduled systems in which waiting is realized by either spinning or suspending. Furthermore, this technique can be used to construct fine-grained nested locking protocols that are efficient under spin-based, suspension-oblivious or suspension-aware analysis of priority inversions. Locking protocols built upon this technique perform no worse than coarse-grained locking mechanisms, while allowing for increased parallelism in the average case (and, depending upon the task set, better worst-case performance).

Speaker's bio:

-



Provenance for Database Transformations
Val Tannen | University of Pennsylvania and EPFL

2012-06-21, 16:00 - 17:00
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Database transformations (queries, views, mappings) take apart, filter, and recombine source data in order to populate warehouses, materialize views, and provide inputs to analysis tools. As they do so, applications often need to track the relationship between parts and pieces of the sources and parts and pieces of the transformations' output. This relationship is what we call database provenance.

This talk presents an approach to database provenance that relies on two observations. First, provenance is a kind of annotation, and we can develop a general approach to annotation propagation that also covers other applications, for example to uncertainty and access control. In fact, provenance turns out to be the most general kind of such annotation, in a precise and practically useful sense. Second, the propagation of annotation through a broad class of transformations relies on just two operations: one when annotations are jointly used and one when they are used alternatively. This leads to annotations forming a specific algebraic structure, a commutative semiring.

The semiring approach works for annotating tuples, field values and attributes in standard relations, in nested relations (complex values), and for annotating nodes in (unordered) XML. It works for transformations expressed in the positive fragment of relational algebra, nested relational calculus, unordered XQuery, as well as for Datalog, GLAV schema mappings, and tgd constraints. Finally, when properly extended to semimodules it works for queries with aggregates. Specific semirings correspond to earlier approaches to provenance, while others correspond to forms of uncertainty, trust, cost, and access control.
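The two operations can be illustrated with a small sketch (the function names and the dict-based encoding of annotated relations are my own): annotations are multiplied when tuples are used jointly (join) and added when they are used alternatively (union or duplicate derivations). Instantiating plus/times with the natural numbers yields bag multiplicities; Boolean or/and recovers set semantics.

```python
import operator

def union(r, s, plus=operator.add):
    """Alternative use (the semiring's +): a tuple derivable from either
    input gets the sum of its annotations."""
    out = dict(r)
    for t, a in s.items():
        out[t] = plus(out[t], a) if t in out else a
    return out

def join(r, s, times=operator.mul, plus=operator.add):
    """Joint use (the semiring's *): natural join on the first attribute
    multiplies annotations; duplicate derivations of one output tuple
    are merged with +."""
    out = {}
    for (k1, x), a in r.items():
        for (k2, y), b in s.items():
            if k1 == k2:
                t, ann = (k1, x, y), times(a, b)
                out[t] = plus(out[t], ann) if t in out else ann
    return out
```

Passing, say, `plus=operator.or_` and `times=operator.and_` over Boolean annotations specializes the same propagation code to a different semiring, which is the point of the abstraction.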

This is joint work with Y. Amsterdamer, D. Deutch, J.N. Foster, T.J. Green, Z. Ives, and G. Karvounarakis, done in part within the frameworks of the Orchestra and pPOD projects.

Speaker's bio:

Val Tannen is a professor in the Department of Computer and Information Science of the University of Pennsylvania. He joined Penn after receiving his PhD from the Massachusetts Institute of Technology in 1987. Having worked for a time in Programming Languages, he now focuses on Databases. Moreover, he has always been interested in applications of Logic to Computer Science, and since 1994 he has also worked in Bioinformatics, leading a number of interdisciplinary projects. In Databases, he and his students and collaborators have worked on query language design and on models and systems for query optimization, parallel query processing, and data integration. More recently their work has focused on models and systems for data sharing, data provenance and the management of uncertain information.



How Mobile Disrupts and Opens Up Social as We Know It
Monica Lam | Stanford University

2012-06-01, 10:30 - 11:30
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Being a personal device that is constantly online, the mobile device will revolutionize social computing. We will be able to share multimedia and interact freely with any of our different circles of friends, colleagues, and families, without being confined to closed proprietary social networks and without having our communication intermediated by third-parties. We believe this openness will lead to an explosion of social apps in new categories including health, finance, and corporate.

We have started the revolution with the first such system, called Musubi. Musubi is a real-time, multimedia group chat app that allows users to control all their data. It is also an open app platform that lets apps be easily created and spread virally among friends. Musubi is built on top of an egocentric social platform where identity-based cryptography is used to allow individuals to communicate securely with each other without the friction of key exchange. It provides an innovative identity firewall that supports social apps without users giving up friends' data to the app creators.

Musubi Beta and a number of social apps are already available in the Google Play Store. We invite you to give us feedback on the apps and to join us in creating an open mobile social internet.

Speaker's bio:

Monica S. Lam has been a Professor in the Computer Science Department at Stanford University since 1988, and the Faculty Director of the Stanford MobiSocial Computing Laboratory. She received her PhD in Computer Science from Carnegie Mellon University. Her current research interest is in creating open social computing platforms. She has worked in the areas of high-performance computing, computer architecture, compiler optimizations, security analysis, and virtualization-based computer management. She is a co-author of the "Dragon book" and the founding CEO of MokaFive, a desktop virtualization company started by her research group. Monica is an ACM Fellow.



Unique Identification for India's Residents: The UIDAI Project
R.S. Sharma | Unique Identification Authority of India (UIDAI)

2012-05-30, 11:30 - 12:15
Saarbrücken building E1 5, room 029 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The Unique Identification Authority of India (UIDAI) has been set up by the Govt. of India with a mandate to issue a unique identification number to all the residents in the country. UIDAI proposes to create a platform to first collect the identity details and then to perform authentication that can be used by several government and commercial service providers. A key requirement of the UID system is to minimize/eliminate duplicate identities in order to improve the efficacy of service delivery. UIDAI has selected a biometric feature set as the primary method to check for duplicate identities. In order to ensure that an individual is uniquely identified in an easy and cost-effective manner, it is necessary that the captured biometric information is capable of supporting de-duplication at the time of collection. For government and commercial providers to authenticate identity at the time of service delivery, it is necessary that biometric information capture and transmission are standardized across all the partners and users of the UID system.

UID provides online, cost-effective, ubiquitous authentication services across the country and establishes identity. Authentication is transactional and is performed at the time a benefit is availed.

The objective of the UIDAI is to provide a robust, reusable ID to those who have no proof of identity, to clean up existing databases of duplicate and fake entries through uniqueness, and to improve the targeting and delivery of services. It will also reduce the cost of service delivery.

Speaker's bio:

Mr. R.S. Sharma is currently working as Director General & Mission Director of the Unique Identification Authority of India (UIDAI) and is responsible for implementing this ambitious and challenging project undertaken by the Government of India to provide unique identification to all its residents. In his capacity as the DG&MD of this project he is responsible for the overall implementation of this mission-mode project.

Prior to this assignment Mr. Sharma worked with the Government of Jharkhand as Principal Secretary, Departments of Science and Technology and Drinking Water & Sanitation. His previous assignments include Principal Secretary of the Departments of Information Technology (IT), Rural Development and Human Resources Development. As Principal Secretary of the IT Department, Mr. Sharma was responsible for formulation of State policies in the IT and e-Governance areas. He also over-saw the implementation of various e-Governance Projects in all the Departments of the State Government.

Mr. Sharma has held important positions both in the Government of India and State Governments in the past. He has worked in the sectors like Finance, Transport, Treasury, Provident Fund and Water Resources and has been deeply involved in the administrative reforms and leveraging IT to simplify the administrative processes. During his posting in Government of India, he has worked in the Department of Economic Affairs and has dealt with bilateral and multilateral development agencies like World Bank, ADB, MIGA and GEF. He was also in-charge of Financing of Infrastructure projects in the Highways, Ports, Airports and Telecom sectors.

Mr. Sharma's contributions to the IT and e-Governance have been widely recognized both within the country and outside. He has been responsible for implementing a number of Projects relating to ICT Infrastructure, Process Re-engineering and Service Delivery in Public Private Partnership (PPP) mode.

Mr. Sharma holds a master's degree in Mathematics from IIT Kanpur (India) and a second master's in Computer Science from the University of California (USA).



Scalable Machine Learning for the User
Alexander J. Smola | Yahoo! Research & UC Berkeley & ANU

2012-05-14, 11:00 - 12:30
Saarbrücken building E1 4, room 019 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Scalable content personalization and profiling is a key tool for the internet. In this talk I will use three problems to illustrate how this can be achieved. More specifically I will show how hashing can be used to compactly represent enormous numbers of parameters, how distributed latent variable inference can be used for user profiling, and how session modeling provides an attractive alternative to ranking.
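The hashing idea mentioned first is commonly known as the hashing trick: instead of keeping one weight per distinct token (an unbounded space), tokens are hashed into a fixed number of buckets. The sketch below is a minimal illustration under my own choices (CRC32 as the hash, unsigned counts; production systems typically add a second hash to sign each feature and reduce collision bias):

```python
import zlib

def hashed_features(tokens, num_buckets=2**20):
    """Feature hashing: map an unbounded token space into a fixed-size
    parameter vector by hashing each token to a bucket, so a model stores
    num_buckets weights rather than one per distinct token.
    Returns a sparse {bucket_index: count} representation."""
    x = {}
    for tok in tokens:
        i = zlib.crc32(tok.encode()) % num_buckets  # deterministic bucket
        x[i] = x.get(i, 0) + 1
    return x
```

Collisions merge unrelated tokens into one weight, but with enough buckets the impact on model quality is small, and no dictionary ever has to be stored or synchronized.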

Speaker's bio:

Alex Smola received the Master's degree in Physics at the University of Technology Munich in 1996, and the Doctoral Degree in computer science at the University of Technology Berlin in 1998. Until 1999 he was a researcher at the GMD Institute for Software Engineering and Computer Architecture in Berlin (now part of the Fraunhofer Gesellschaft). After that, he worked as a Researcher and Group Leader at the Research School for Information Sciences and Engineering of the Australian National University. From 2004 onwards he worked as a Senior Principal Researcher and Program Leader at the Statistical Machine Learning Program at NICTA. Since 2008, he has been a Principal Research Scientist at Yahoo! Research in Santa Clara, CA, USA.



Credit Networks: Liquidity and Formation
Pranav Dandekar | Stanford University

2012-04-23, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Credit networks represent a way of modeling trust between entities in a network. Nodes in the network print their own currency and trust each other for a certain amount of each other's currency. This allows the network to serve as a decentralized payment infrastructure---arbitrary payments can be routed through the network by passing IOUs along a chain of trusting nodes in their respective currencies---and obviates the need for a common currency. Thus, credit networks are a decentralized, trust-based approach to enabling interactions between untrusting individuals in internet-based networks and markets.
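The IOU-passing mechanism can be sketched concretely. The function below is an illustrative model of a single payment along a known trust path (the representation and names are my own, and real systems must also find the path and split payments across several paths): on each hop (u, v), v accepts `amount` more of u's IOUs, consuming credit in one direction and opening it in the other.

```python
def pay_along_path(credit, path, amount):
    """Execute a payment from path[0] to path[-1] by passing IOUs along
    the chain. credit[u][v] = how much more of v's currency u will accept.
    The payment is atomic: feasibility of every hop is checked before any
    credit line is modified."""
    hops = list(zip(path, path[1:]))
    if any(credit[v][u] < amount for u, v in hops):
        return False  # insufficient residual credit somewhere on the path
    for u, v in hops:
        credit[v][u] -= amount  # v now holds `amount` more of u's IOUs
        credit[u][v] += amount  # symmetric headroom opens in reverse
    return True
```

Note that a payment does not destroy credit, it only shifts it along the path; this is why liquidity questions reduce to how the credit state evolves under sequences of transactions.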

We will first analyze the liquidity, i.e. the ability to route transactions, in credit networks in terms of the long-term failure probability of transactions for various network topologies and credit values. We will show that under symmetric transaction rates, the transaction failure probability in a number of credit network topologies is comparable to that in equivalent centralized currency systems; thus we give up little liquidity in return for their robustness and decentralized operation. This is based on joint work with Ashish Goel, Ramesh Govindan and Ian Post which appeared in EC'11.

Next we will analyze the formation of credit networks when agents strategically decide how much credit to extend each other under different models of risk. When each agent trusts a fixed set of other agents, and transacts directly only with those it trusts, the formation game is a potential game and all Nash equilibria are social optima. Moreover, the Nash equilibria of this game are equivalent in a very strong sense: the sequences of transactions that can be supported from each equilibrium credit network are identical. However, when we allow transactions over longer paths, the game may not admit a Nash equilibrium, and the price of anarchy is unbounded. When agents have a shared belief about the trustworthiness of each agent, the networks formed in equilibrium have a star-like structure. Though the price of anarchy is unbounded, myopic best response quickly converges to a social optimum. This is based on joint work with Ashish Goel, Michael Wellman and Bryce Wiedenbeck to be presented at WWW'12.

Speaker's bio:

Pranav Dandekar is a PhD candidate in Management Science & Engineering at Stanford. He is broadly interested in algorithmic and economic aspects of online networks and markets. He worked at Amazon in Seattle for 3.5 years before heading back to school to pursue a PhD. He received his B.Eng from SGSITS, Indore, India in 2002 and his MS from the University of Florida, Gainesville in 2004, both in Computer Science.



Energy Debugging in Smartphones
Charlie Hu | Purdue University

2012-04-16, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Despite the incredible market penetration of smartphones and exponential growth of the app market, the utility of smartphones has been and will remain severely limited by battery life. As such, energy has increasingly become the scarcest resource on smartphones and one that critically affects user experience. In this talk, I will start with a first study that characterizes smartphone energy bugs, or ebugs, broadly defined as errors in the system (apps, OS, hardware, firmware, or external conditions) that result in unexpected smartphone battery drainage and lead to significant user frustration.

As a first step towards taming ebugs, we built the first fine-grained energy profiler, eprof, that performs energy accounting and hence answers the very question "where was the energy spent in the app?" at the per-routine, per-thread, and per-process granularity. Building eprof in turn requires a fine-grained, online power model which we have developed that captures the unique asynchronous power behavior of modern smartphones. Using eprof, we dissected the energy drain of some of the most popular apps in Android Market and discovered ebugs in popular apps like Facebook.

While essential, eprof only provides a semi-automatic tool for energy debugging. The "holy grail" in energy debugging in smartphones is to develop fully automatic debugging techniques and tools, which can draw synergies from many areas of computer science including OS, PL, compilers, machine learning, HCI, etc. I will present the first automatic ebug detection technique based on static compiler analysis for detecting "no-sleep" energy bugs, the most notorious category of energy bugs found in smartphone apps.

Speaker's bio:

Y. Charlie Hu is a Professor of Electrical and Computer Engineering and Computer Science (by courtesy) and a University Faculty Scholar at Purdue University. He received his Ph.D. in Computer Science from Harvard in 1997, was a postdoc at Rice University working with Willy Zwaenepoel, Peter Druschel, and Alan Cox, and co-founded iMimic Networking, Inc. before joining Purdue in 2002. Charlie received the NSF CAREER Award in 2003, the 2009 Early Career Research Award from Purdue College of Engineering, and was named an ACM Distinguished Member in 2010. His research interests lie broadly in distributed systems, operating systems, computer networking, wireless networking, and high performance computing.



SCION: Scalability, Control, and Isolation On Next-Generation Networks
Adrian Perrig | Carnegie Mellon University

2012-03-28, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We present the first Internet architecture designed to provide route control, failure isolation, and explicit trust information for end-to-end communications. SCION separates ASes into groups of independent routing sub-planes, called trust domains, which then interconnect to form complete routes. Trust domains provide natural isolation of routing failures and human misconfiguration, give endpoints strong control for both inbound and outbound traffic, provide meaningful and enforceable trust, and enable scalable routing updates with high path freshness. As a result, our architecture provides strong resilience and security properties as an intrinsic consequence of good design principles, avoiding piecemeal add-on protocols as security patches. Meanwhile, SCION only assumes that a few top-tier ISPs in the trust domain are trusted for providing reliable end-to-end communications, thus achieving a small Trusted Computing Base. Both our security analysis and evaluation results show that SCION naturally prevents numerous attacks and provides a high level of resilience, scalability, control, and isolation.

Speaker's bio:

Adrian Perrig is a Professor in Electrical and Computer Engineering, Engineering and Public Policy, and Computer Science at Carnegie Mellon University. Adrian serves as the technical director for Carnegie Mellon's Cybersecurity Laboratory (CyLab). He earned his Ph.D. degree in Computer Science from Carnegie Mellon University, and spent three years during his Ph.D. degree at the University of California at Berkeley. He received his B.Sc. degree in Computer Engineering from the Swiss Federal Institute of Technology in Lausanne (EPFL). Adrian's research revolves around building secure systems and includes network security, trustworthy computing and security for social networks. More specifically, he is interested in trust establishment, trustworthy code execution in the presence of malware, and how to design secure next-generation networks. More information about his research is available on Adrian's web page. He is a recipient of the NSF CAREER award in 2004, IBM faculty fellowships in 2004 and 2005, the Sloan research fellowship in 2006, the Security 7 award in the category of education by the Information Security Magazine in 2009, and the Benjamin Richard Teare teaching award in 2011.



Forecast: Cloudy with a Chance of Consistency
Tim Kraska | UC Berkeley

2012-03-26, 10:30 - 11:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room Wartburg, 5th floor

Abstract:

Cloud computing promises virtually unlimited scalability and high availability at low cost. However, it is commonly believed that a system's consistency must be relaxed in order to achieve these properties. We evaluated existing commercial cloud offerings for transactional workloads and found that consistency is indeed expensive and limits scalability in these systems.  Because of these costs, many systems designers have chosen to provide only relaxed consistency guarantees, if any, making such systems inappropriate for many mission-critical applications. This dichotomy is based on unrealistically pessimistic assumptions about Big Data environments. First, it assumes that consistency is an all or nothing decision that must be applied uniformly to all data in a system. Secondly, even in situations where strong consistency is required, previous transaction commit protocols were based on worst-case assumptions regarding the likelihood of conflicts.  In this talk, I will describe two techniques that build on a more nuanced view of consistency requirements and the costs of maintaining them.

I will first describe Consistency Rationing, which builds on inventory holding models used in Operations Research to help classify and manage data based on their consistency requirements. Consistency Rationing exploits the fact that for some data the cost of maintaining consistency outweighs the benefit obtained by avoiding inconsistencies. In the second part of the talk, I will present a new optimistic commit protocol for the wide-area network. For a long time, synchronized wide-area replication was considered to be infeasible with strong consistency. With MDCC, I will show how we can achieve strong consistency with similar response-time guarantees as eventual consistency in the normal operational case. This work was done as part of a larger project around Big Data management. At the end of the talk, I will provide an overview of some of my other projects and give an outline for future work.

Speaker's bio:

Tim Kraska is a PostDoc in the AMPLab, which is part of the Computer Science Division at UC Berkeley. Currently his research focuses on Big Data management in the cloud and hybrid human/machine database systems. Before joining UC Berkeley, Tim Kraska received his PhD from ETH Zurich, where he worked on transaction management and stream processing in the cloud as part of the Systems Group. He received a Swiss National Science Foundation Prospective Researcher Fellowship (2010), a University of Sydney Master of Information Technology Scholarship for outstanding achievement (2005), the University of Sydney Siemens Prize (2005), and a VLDB best demo award (2011).



Turning Sequential into Concurrent
Petr Kuznetsov | TU Berlin/Deutsche Telekom Laboratories

2012-03-22, 15:00 - 16:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

It seems to be generally accepted that designing correct and efficient concurrent software is a sophisticated task that can only be handled by experts. A crucial challenge then is to convert sequential code produced by a "mainstream" programmer into concurrent code. Various synchronization techniques may be used for this, e.g., locks or transactional memory, but what does it mean for the resulting concurrent implementation to be correct? And which synchronization primitives provide more efficiency in the end?

We introduce a correctness criterion for a transformation that enables the use of a sequential data structure in a concurrent system. We then evaluate the performance of the resulting concurrent implementations in terms of the sets of concurrent schedules (interleavings of steps of the sequential code) they accept. Intuitively, this captures the amount of concurrency that a given implementation can stand. This allows us to analyze the relative power of seemingly different synchronization techniques, such as various forms of locking and transactional memory.

Speaker's bio:

Petr Kuznetsov received his Ph.D. in computer science from EPFL in 2005 (Distributed Programming Lab, Prof. Rachid Guerraoui) and did a postdoc at the Max Planck Institute for Software Systems (the group of Prof. Peter Druschel) in 2005-2008. Since 2008, he has been a senior research scientist at Deutsche Telekom Laboratories and Technical University of Berlin. His research interests are in foundations of distributed systems, in particular, synchronization and fault-tolerance, transactional memory, and complexity and computability bounds in concurrent systems.



Language as Influence(d)
Cristian Danescu-Niculescu-Mizil | Cornell University

2012-03-19, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th Floor

Abstract:

What effect does language have on people, and what effect do people have on language? The answers to these questions can help shape the future of social-media systems by bringing a new understanding of communication and collaboration between users.

I will describe two of my efforts to address these fundamental problems computationally, exploiting very large-scale textual and social data. The first project uncovers previously unexamined contextual biases that people have when determining which opinions to focus on, using Amazon.com helpfulness votes on reviews as a case study to evaluate competing theories from sociology and social psychology. The second project leverages insights from psycho- and socio-linguistics and embeds them into a novel computational framework in order to provide a new understanding of how key aspects of social relations between individuals are embedded in (and can be inferred from) their conversational behavior. In particular, I will discuss how power differentials between interlocutors are subtly revealed by how much one individual immediately echoes the linguistic style of the person they are responding to.
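The style-echoing idea in the second project can be sketched as a simple conditional-probability measure. The function below is a simplified formulation of my own, not the exact measure from the talk (which works with validated linguistic style dimensions rather than an ad hoc word list): coordination of repliers toward initiators on one marker is the probability that a reply exhibits the marker given that the original message did, minus the reply baseline.

```python
def style_coordination(exchanges, marker_words):
    """Coordination of repliers toward initiators on one style marker
    (e.g. a set of personal pronouns):
    P(reply exhibits marker | original did) - P(reply exhibits marker).
    exchanges: list of (original_message, reply) string pairs."""
    def has_marker(text):
        return any(w in text.lower().split() for w in marker_words)

    base = sum(has_marker(r) for _, r in exchanges) / len(exchanges)
    triggered = [(o, r) for o, r in exchanges if has_marker(o)]
    if not triggered:
        return 0.0
    cond = sum(has_marker(r) for _, r in triggered) / len(triggered)
    return cond - base
```

A positive value means repliers echo the marker more when the person they answer has just used it; asymmetries in this quantity between two interlocutors are what the talk connects to power differentials.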

This talk includes joint work with Susan Dumais, Michael Gamon, Jon Kleinberg, Gueorgi Kossinets, Lillian Lee and Bo Pang.

Speaker's bio:

Cristian Danescu-Niculescu-Mizil's research aims at developing computational frameworks that can lead to a better understanding of human social behavior, by unlocking the unprecedented potential of the large amounts of natural language data generated online. His work tackles problems related to conversational behavior, opinion mining, computational semantics and computational advertising. Cristian is a PhD student in computer science at Cornell University. Earlier, he earned a master's degree from Jacobs University Bremen and an undergraduate degree from the University of Bucharest. He is the recipient of a Yahoo! Key Scientific Challenges award and his work has been featured in popular-media outlets such as Nature News and MIT's Technology Review blog.



Information Discovery in Large Complex Datasets
Julia Stoyanovich | University of Pennsylvania

2012-03-15, 10:30 - 11:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

The focus of my research is on enabling novel kinds of interaction between the user and the information in a variety of digital environments, ranging from social content sites, to digital libraries, to the Web. In this talk, I will give an overview of my research, and will then present two recent lines of work that focus on information discovery in two important application domains.

In the first part of this talk, I will present an approach for tracking and querying fine-grained provenance in data-intensive workflows. A workflow is an encoding of a sequence of steps that progressively transform data products. Workflows help make experiments reproducible, and may be used to answer questions about data provenance -- the dependencies between input, intermediate, and output data. I will describe a declarative framework that captures fine-grained dependencies, enabling novel kinds of analytic queries, and will demonstrate that careful design and leveraging distributed processing make tracking and querying fine-grained provenance feasible.

In the second part of this talk, I will discuss personalized search and ranking on the Social Web. Social Web users provide information about themselves in stored profiles, register their relationships with other users, and express their preferences with respect to information and products. I will argue that information discovery should account for a user's social context, and will present network-aware search – a novel search paradigm in which result relevance is computed with respect to a user's social network. I will describe efficient algorithms appropriate for this setting, and will show how social similarities between users may be leveraged to make processing more efficient.

Speaker's bio:

Julia Stoyanovich is a Visiting Scholar at the University of Pennsylvania. Julia holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics and Statistics from the University of Massachusetts at Amherst. After receiving her B.S., Julia went on to work for two start-ups and one real company in New York City, where she interacted with, and was puzzled by, a variety of massive datasets. Julia's research focuses on modeling and exploring large datasets in the presence of rich semantic and statistical structure. She has recently worked on personalized search and ranking in social content sites, rank-aware clustering in large structured datasets that focus on dating and restaurant reviews, data exploration in repositories of biological objects as diverse as scientific publications, functional genomics experiments and scientific workflows, and representation and inference in large datasets with missing values.



Why and How: A Reverse Perspective on Data Management
Alexandra Meliou | University of Washington

2012-02-27, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

Current trends have seen data grow larger, more intertwined, and more diverse, as more and more users contribute to and use it. This trend has given rise to the need to support richer data analysis tasks. Such tasks involve determining the causes of observations, finding and correcting the sources of error in query results, as well as modifying the data in order to make it conform to complex desirable properties.

In this talk I will discuss three challenges: (a) providing explanations through support for causal queries ("Why"), (b) tracing and correcting errors at their source (post-factum data cleaning), and (c) integrating database systems with constrained optimization capabilities ("How"). First, I will show how to apply causal reasoning to tuple provenance in order to determine the causes of query results, and their responsibility. I will present an extensive analysis of the data complexity for the case of conjunctive queries, and focus on a complete dichotomy between NP-hard and PTIME cases for the problem of computing responsibility. This concrete characterization of PTIME cases is crucial in scaling up to the challenges of Big Data. Second, I will demonstrate the applicability of the causality framework in a practical setting. I will use a mobile sensing application to show that ranking provenance tuples by their degrees of responsibility identifies errors more effectively than other schemes. Finally, I will present the Tiresias system, the first how-to query engine, which seamlessly integrates database systems with constrained problem solving capabilities. The contributions of the system are threefold: (a) a declarative interface for defining how-to queries over a database, (b) translation rules from the declarative statements to the constrained problem specification, and (c) a suite of data-specific optimizations that allow scaling to large data sizes. Initial results of our prototype system implementation show order-of-magnitude speedups over state-of-the-art solver runtimes, which indicates that there are significant gains in pushing this functionality into the database engine. I will conclude with a summary of my contributions and discuss future steps for the Tiresias system and the bigger vision of reverse data management.
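
The responsibility notion can be made concrete with a toy brute-force sketch (ours, not the talk's algorithm, which handles this far more efficiently): a tuple t has responsibility 1/(1+|Gamma|), where Gamma is a smallest set of other tuples whose removal makes t counterfactual, i.e. removing t then flips the query answer. The query and database below are invented examples.

```python
from itertools import combinations

def responsibility(db, t, query):
    """Brute-force causal responsibility of tuple t for a Boolean query."""
    assert query(db), "query must hold on the full database"
    others = [x for x in db if x != t]
    for k in range(len(others) + 1):  # search smallest contingency set first
        for gamma in combinations(others, k):
            rest = set(db) - set(gamma)
            if query(rest) and not query(rest - {t}):
                return 1.0 / (1 + k)
    return 0.0  # t is not a cause of the answer

# Toy query: "is there a row with value > 10?"
db = {("a", 12), ("b", 15), ("c", 3)}
big = lambda rows: any(v > 10 for _, v in rows)

print(responsibility(db, ("a", 12), big))  # 0.5: ("b", 15) must be removed first
print(responsibility(db, ("c", 3), big))   # 0.0: not a cause
```

Ranking provenance tuples by this score is exactly the error-identification scheme evaluated in the mobile sensing case study.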

Speaker's bio:

-



No Free Lunch in Data Privacy
Ashwin Machanavajjhala | Yahoo! Santa Clara, CA

2012-02-20, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

Tremendous amounts of personal data about individuals are being collected and shared online. Legal requirements and an increase in public awareness due to egregious breaches of individual privacy have made data privacy an important field of research. Recent research, culminating in the development of a powerful notion called differential privacy, has transformed this field from a black art into a rigorous mathematical discipline.

This talk critically analyzes the trade-off between accuracy and privacy in the context of social advertising – recommending people, products or services to users based on their social neighborhood. I will present a theoretical upper bound on the accuracy of performing recommendations that are solely based on a user's social network, for a given level of (differential) privacy of sensitive links in the social graph. I will show using real networks that good private social recommendations are feasible only for a small subset of the users in the social network or for a lenient setting of privacy parameters.
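
As background for the privacy parameter in the trade-off above, here is a minimal sketch (ours, not the speaker's construction) of the standard Laplace mechanism for differential privacy: add noise with scale sensitivity/epsilon to a query answer. A smaller epsilon gives each link or record stronger protection, but a noisier and therefore less useful answer, which is the accuracy/privacy tension the talk quantifies.

```python
import random

def laplace_noise(scale):
    # the difference of two exponentials is Laplace-distributed
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)
friends_in_common = 42  # adding/removing one edge changes this count by at most 1
for eps in (1.0, 0.1, 0.01):
    print(f"epsilon={eps}: noisy count = {private_count(friends_in_common, eps):.1f}")
```

The noise grows as epsilon shrinks, which is why, for strict privacy of links in the social graph, accurate recommendations remain feasible only for a small subset of users.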

I will also describe some exciting new research about a no free lunch theorem, which argues that privacy tools (including differential privacy) cannot simultaneously guarantee utility as well as privacy for all types of data, and conclude with directions for future research in data privacy and big-data management.

Speaker's bio:

Ashwin Machanavajjhala is a Senior Research Scientist in the Knowledge Management group at Yahoo! Research. His primary research interests lie in data privacy with a specific focus on formally reasoning about privacy under probabilistic adversary models. He is also interested in big-data management and statistical methods for information integration. Ashwin graduated with a Ph.D. from the Department of Computer Science, Cornell University. His thesis work on defining and enforcing privacy was awarded the 2008 ACM SIGMOD Jim Gray Dissertation Award Honorable Mention. He has also received an M.S. from Cornell University and a B.Tech in Computer Science and Engineering from the Indian Institute of Technology, Madras.



Robust replication
Allen Clement | Max Planck Institute for Software Systems

2012-02-13, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room Wartburg 5th floor

Abstract:

The choice between Byzantine and crash fault tolerance is viewed as a fundamental design decision when building fault tolerant systems. We show that this dichotomy is not fundamental, and present a unified model of fault tolerance in which the number of tolerated faults of each type is a configuration choice. Additionally, we observe that a single fault is capable of devastating the performance of existing Byzantine fault tolerant replication systems. We argue that fault tolerant systems should, and can, be designed to perform well even when failures occur. In this talk I will expand on these two insights and describe our experience leveraging them to build a generic fault tolerant replication library that provides flexible fault tolerance and robust performance. We use the library to build a fault tolerant version of the Hadoop Distributed File System.

Speaker's bio:

Allen Clement is a Postdoctoral Researcher at the Max Planck Institute for Software Systems. He received a Ph.D. from the University of Texas at Austin and an A.B. in Computer Science from Princeton University. His research focuses on the challenges of building robust and reliable distributed systems. In particular, he has investigated practical Byzantine fault tolerant replication, systems robust to both Byzantine and selfish behaviors, consistency in geo-replicated environments, and how to leverage the structure of social networks to build Sybil-tolerant systems.



Exploring the Technical and Economic Factors Underlying Internet Scams
Prof. Geoffrey Voelker | University of California, San Diego

2012-01-16, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Today, the large-scale compromise of Internet hosts serves as a platform for supporting a range of criminal activity in the so-called Internet underground economy. In this talk I will quickly survey work that our group has performed over the past decade on the problems posed by these threats, and how our research directions have evolved over time. In the remainder of the talk, I will describe recent work that our group has performed, including the ecosystem of CAPTCHA-solving service providers and an end-to-end analysis of the spam value chain. Using extensive measurements over months of diverse spam data, broad crawling of naming and hosting infrastructures, and product purchases from a wide variety of spam-advertised sites, I'll characterize the relative prospects for anti-spam interventions at multiple levels.

This work is part of a larger effort of the Collaborative Center for Internet Epidemiology and Defenses (CCIED), a joint NSF Cybertrust Center with UCSD and ICSI (http://www.ccied.org), and an ONR MURI collaboration (http://www.sysnet.ucsd.edu/botnets).

Speaker's bio:

Geoffrey M. Voelker is a Professor at the University of California at San Diego. His research interests include operating systems, distributed systems, and computer networks. He received a B.S. degree in Electrical Engineering and Computer Science from the University of California at Berkeley in 1992, and the M.S. and Ph.D. degrees in Computer Science and Engineering from the University of Washington in 1995 and 2000, respectively.



Dissent: Accountable Anonymous Group Communication
Bryan Ford | Yale University

2012-01-04, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The ability to participate anonymously in online communities is widely valued, but existing anonymous communication protocols offer limited security against anonymous abuse, traffic analysis, or denial-of-service attacks. Dissent is a general-purpose anonymous messaging protocol that addresses these limitations in the context of group communication applications, where members of a well-defined group wish to "post" messages anonymously to each other or onto a common "bulletin board" without a message being linkable to a particular group member. Unlike most existing anonymous communication schemes, the Dissent protocol provides anonymity while also holding its members accountable: if a group member deviates from the protocol in an attempt to block communication or attack other members anonymously, the protocol ensures that the group can identify and expel the misbehaving member. The key technical idea enabling this combination of anonymity and accountability is to use a verifiable cryptographic shuffle scheme as a setup phase for a dining cryptographers (DC-nets) communication channel. The verifiable shuffle enables the group to create and agree on a random permutation of user identities to pseudonyms, then use that permutation as a logical schedule for subsequent DC-nets communication. The group can use this agreed-upon schedule to identify any member attempting to disrupt communication by deviating from the schedule or jamming the DC-nets channel. Current working prototypes of Dissent support small groups, but ongoing efforts are extending the protocol to scale to large groups, handle node failure and network churn gracefully, and address intersection attacks against users who maintain long-term pseudonyms.
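
The DC-nets layer underlying Dissent can be illustrated with a toy single-round sketch (ours; the verifiable shuffle, scheduling, and accountability machinery are omitted): every pair of members shares a random pad, each member publishes the XOR of its pads, and the designated sender additionally XORs in the message. The XOR of all published values reveals the message, while the broadcasts alone do not identify who sent it.

```python
import secrets

def dc_net_round(n, sender, message: bytes):
    """One toy DC-nets round among n members; returns the recovered message."""
    length = len(message)

    # pairwise shared pads: pad[i][j] == pad[j][i]
    pads = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            pads[i][j] = pads[j][i] = secrets.token_bytes(length)

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    broadcasts = []
    for i in range(n):
        out = bytes(length)  # all zeros
        for j in range(n):
            if j != i:
                out = xor(out, pads[i][j])
        if i == sender:
            out = xor(out, message)  # only the sender folds in the message
        broadcasts.append(out)

    # every pad appears exactly twice and cancels; the message remains
    result = bytes(length)
    for b in broadcasts:
        result = xor(result, b)
    return result

print(dc_net_round(5, sender=2, message=b"anonymous hello"))  # b'anonymous hello'
```

Dissent's contribution is precisely what this sketch lacks: a verified slot schedule so that a jamming or misbehaving member can be identified and expelled.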

Speaker's bio:

-



An Automata-theoretic Model of Programming Languages
Uday Reddy | University of Birmingham

2011-12-02, 13:00 - 14:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

In this talk, I present a new model of class-based Algol-like programming languages inspired by automata-theoretic concepts.  The model may be seen as a variant of the "object-based" model previously developed in 1993, where objects are described by their observable behaviour in terms of events, and the "state-based" models studied by Reynolds, Oles, Tennent and O'Hearn, where objects are not explicitly represented.  The idea is to view objects as automata which are described from the outside through their observable behaviour while, internally, their operations are represented as state transformations.  This allows us to combine both the state-based and event-based views of objects.  I illustrate the efficacy of the model by proving several test equivalences and discuss its connections to the previous models.

Speaker's bio:

-



Towards a Highly Available Internet
Thomas Anderson | University of Washington

2011-10-31, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Internet availability, the ability for any two nodes in the Internet to communicate, is essential to using the Internet to deliver critical applications such as real-time health monitoring and response. Despite massive investment by ISPs worldwide, Internet availability remains poor, with literally hundreds of outages occurring daily, even in North America and Europe. Some have suggested that addressing this problem requires a complete redesign of the Internet, but in this talk I will argue that considerable progress can be made with a small set of backward-compatible changes to the existing Internet protocols. We take a two-pronged approach. Many outages occur on a fine-grained time scale due to the convergence properties of BGP, the Internet's interdomain routing system. We describe a novel set of additions to BGP that retains its structural properties, but applies lessons from fault-tolerant distributed systems research to radically improve its availability. Other outages are longer-lasting and occur due to complex interactions between router failures and router misconfiguration. I will describe some ongoing work to build an automated system to quickly localize and repair these types of problems.

Speaker's bio:

Thomas Anderson is the Robert E. Dinning Professor of Computer Science and Engineering at the University of Washington. His research interests span all aspects of building practical, robust, and efficient computer systems, including distributed systems, operating systems, computer networks, multiprocessors, and security. He is an ACM Fellow, winner of the ACM SIGOPS Mark Weiser Award, winner of the IEEE Bennett Prize, past program chair of SIGCOMM and SOSP, and he has co-authored seventeen award papers. More information about his research is available on Tom Anderson's web page.



JavaScript and V8 -- Functional-ish programming in the mainstream
Andreas Rossberg | Google

2011-09-02, 10:00 - 11:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

JavaScript is arguably the most widely used "lambda language": first-class functions play a central role in the language, and they form the basis for its object system. Not only that: every JavaScript programmer uses concepts like higher-order functions or continuation-passing on a daily basis, without ever having heard those terms.

In this talk, I first give a quick introduction to the good, the bad, and the unfathomable of JavaScript, for the language geeks. Then I present some of the technology that V8, Google's high-performance JavaScript VM, uses to get performance out of this mess, namely just-in-time compilation, inline caching, type feedback, and dynamic optimization and deoptimization.

Speaker's bio:

-



Opportunity is the Mother of Invention - how Delay Tolerant Networking necessitated Data Centric Networking...
Prof. Jon Crowcroft | University of Cambridge, UK

2011-09-01, 11:00 - 12:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room Wartburg, 5th floor

Abstract:

In this talk, I'm going to tell the story of how a group of European researchers arrived at a design for communications software that seems rather well suited to the new "Data Center Networking" paradigm. The tale starts with my move from UCL to Cambridge and the choice to learn about ad hoc networks, and continues at the Intel Research lablet, where trying out a few disruptive ideas led us to stumble on the notion behind Haggle ("Haggle" comes from the phrase "ad hoc Google"; the area is now really known as Opportunistic Networking), combining results from Grossglauser and Tse's work on the capacity of multi-hop networks with Kevin Fall's work on Delay Tolerant Networks. In the process of building various testbeds in the Haggle project (and three complete versions: for native Java phones, for C# on Windows Mobile, and native Android and iPhone versions), as well as measuring various aspects of human society, we ended up with a system that appears to be rather more general than expected. Most recently, for example, it was used to build a secure, disconnection-tolerant P2P version of Dropbox, as well as to track a flu epidemic.

Speaker's bio:

http://www.cl.cam.ac.uk/~jac22/cv.txt



Mesos: Multiprogramming for Datacenters
Prof. Ion Stoica | University of California, Berkeley

2011-08-23, 13:30 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Today's datacenters need to support a variety of applications, and an even higher variety of dynamically changing workloads. In this talk, I will present Mesos, a platform for sharing commodity clusters between diverse computing frameworks, such as Hadoop and MPI.  Sharing improves cluster utilization and avoids per-framework data replication. To support the diverse requirements of these frameworks, Mesos employs a two-level scheduling mechanism, called resource offers. Mesos decides how many resources to offer each framework, while frameworks decide which resources to accept and which computations to schedule on these resources. To allocate resources across frameworks, Mesos uses Dominant Resource Fairness (DRF). DRF generalizes fair sharing to multiple resources, provides sharing incentives, and is strategy-proof. Our experimental results show that Mesos can achieve near-optimal locality when sharing the cluster among diverse frameworks, can scale up to 50,000 nodes, and is resilient to node failures.
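
The core of DRF can be sketched in a few lines (a simplified illustration, not the Mesos implementation): repeatedly grant the next task to the framework whose dominant share, its largest fraction of any single resource, is currently smallest. The capacities and per-task demands below are illustrative.

```python
import heapq

def drf(capacity, demands):
    """Greedy DRF allocation: returns tasks granted per framework."""
    n = len(capacity)
    usage = {f: [0.0] * n for f in demands}
    total = [0.0] * n
    alloc = {f: 0 for f in demands}
    # min-heap keyed by each framework's current dominant share
    heap = [(0.0, f) for f in sorted(demands)]
    heapq.heapify(heap)
    while heap:
        share, f = heapq.heappop(heap)
        d = demands[f]
        if any(total[r] + d[r] > capacity[r] for r in range(n)):
            continue  # f's next task no longer fits; stop scheduling f
        for r in range(n):
            usage[f][r] += d[r]
            total[r] += d[r]
        alloc[f] += 1
        dominant = max(usage[f][r] / capacity[r] for r in range(n))
        heapq.heappush(heap, (dominant, f))
    return alloc

# 9 CPUs and 18 GB RAM; framework A's tasks need (1 CPU, 4 GB),
# framework B's tasks need (3 CPUs, 1 GB).
print(drf([9, 18], {"A": [1, 4], "B": [3, 1]}))  # → {'A': 3, 'B': 2}
```

Both frameworks end up with equal dominant shares (2/3 of RAM for A, 2/3 of CPU for B), which is the equalization DRF aims for.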

Speaker's bio:

Ion Stoica is an Associate Professor in the EECS Department at University of California at Berkeley, where he does research on cloud computing and networked computer systems. Past work includes the Chord DHT, Dynamic Packet State (DPS), Internet Indirection Infrastructure (i3), declarative networks, replay-debugging, and multi-layer tracing in distributed systems. His current research includes resource management and scheduling for data centers, cluster computing frameworks, and network architectures. He is the recipient of a SIGCOMM Test of Time Award (2011), the 2007 CoNEXT Rising Star Award, a Sloan Foundation Fellowship (2003), a Presidential Early Career Award for Scientists & Engineers (PECASE) (2002), and the 2001 ACM doctoral dissertation award. In 2006, he co-founded Conviva, a startup to commercialize technologies for large scale video distribution.



Predictable Performance for Unpredictable Workloads
Prof. Donald Kossmann | ETH Zurich

2011-08-11, 14:30 - 16:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

This talk presents the design of a novel distributed database system designed to give query response time guarantees independent of the query and update workload. The system makes use of aggressive sharing of operations (e.g., scans and joins) between concurrent queries and updates. Specifically, this talk gives details of the storage manager (called Crescando) and of the query processor (called SharedDB), both of which can be deployed in a distributed and scalable infrastructure. Furthermore, the talk presents the results of performance experiments with workloads from an airline reservation system.

Speaker's bio:

Donald Kossmann is a professor of Computer Science at ETH Zurich (Switzerland). He received his MS from the University of Karlsruhe and completed his PhD at the Technical University of Aachen. After that, he held positions at the University of Maryland, the IBM Almaden Research Center, the University of Passau, the Technical University of Munich, and the University of Heidelberg. He is an ACM Fellow, a member of the board of trustees of the VLDB endowment, and was the program committee chair of the ACM SIGMOD Conf., 2009. He is a co-founder of i-TV-T (1998), XQRL Inc. (acquired by BEA in 2002), 28msec Inc. (2006), and Teralytics (2010). His research interests lie in the area of databases and information systems.



Declarative Data-Driven Coordination Through Entanglement
Johannes Gehrke | Cornell University

2011-07-28, 13:30 - 14:30
Saarbrücken building E1 4, room 024 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

There are many web applications that require users to coordinate. Friends want to coordinate travel plans, students want to jointly enroll in the same set of courses, and busy researchers want to coordinate their schedules. These tasks are difficult to program using existing abstractions provided by database systems because they all require some type of coordination between users. This is fundamentally incompatible with the isolation property of classical ACID database transactions. In this talk, I will argue that it is time to look beyond isolation, and I will describe ideas that allow users to perform declarative data-driven coordination through entangled queries and transactions. This talk describes joint work with Gabriel Bender, Nitin Gupta, Christoph Koch, Lucja Kot, Milos Nikolic, and Sudip Roy.

Speaker's bio:

Johannes Gehrke is a Professor in the Department of Computer Science at Cornell University and a visiting researcher at the MPI-SWS. Johannes' research interests are in the areas of database systems, data mining, and data privacy. Johannes received an NSF CAREER Award, an Arthur P. Sloan Fellowship, an IBM Faculty Award, a Humboldt Research Award, the 2011 IEEE Computer Society Technical Achievement Award, and an ACM SIGMOD Best Paper Award. He co-authored the undergraduate textbook Database Management Systems (McGrawHill 2002, currently in its third edition), used at universities all over the world. Johannes was Program co-Chair of SIGKDD 2004, VLDB 2007, and ICDE 2012. From 2007 to 2008, he was Chief Scientist at Fast Search and Transfer, a leading enterprise search company.

Supported by the Alexander von Humboldt Foundation



Names, Binding and Computation
Andrew Pitts | University of Cambridge

2011-06-02, 15:00 - 16:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:



Nominal datatypes are a simple extension of the usual notion of algebraic datatypes with facilities for declaring constructors involving names and name-binding. In this talk I want to revisit the topic of higher-order functional programming with nominal datatypes. Destructing name-bindings seems inevitably to involve the use of locally scoped names in one form or another. The original work on this topic (such as Shinwell's Fresh patch for OCaml) used state-based implementations of local scoping. In this talk I will discuss this and two other possible implementations that have greater degrees of purity.

As far as I am concerned, the goal of this work is to arrive at a language design that (1) can express common patterns of use of bound names in informal algorithms that manipulate syntax with binders, but (2) has good logical properties: for example, one that can co-exist with Constructive Type Theory. That goal has yet to be attained.

Speaker's bio:

-



WebBlaze: New Techniques and Tools for Web Security & BitBlaze: Computer Security via Binary Analysis
Dawn Song | UC Berkeley

2011-06-02, 11:00 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room Wartburg, 5th floor

Abstract:

I will present the WebBlaze project, which aims to design and develop new techniques and tools to improve web security. WebBlaze's new technologies cover a broad range, including new architectural solutions for defending against cross-site scripting attacks, tools for detecting and defending against cross-origin JavaScript capability leaks that lead to universal cross-site scripting attacks, and new approaches to secure browser extensions and web advertisements.

I will also give an overview of the BitBlaze project, describing how we build a unified binary program analysis platform and use it to provide novel solutions to computer security problems including automatic vulnerability discovery, automatic generation of vulnerability signatures for defense, and automatic extraction of security models for analysis and verification. I will also describe some ongoing efforts in mobile security. More information about WebBlaze and BitBlaze is available at http://webblaze.cs.berkeley.edu and http://bitblaze.cs.berkeley.edu.

Speaker's bio:

Dawn Song is Associate Professor of Computer Science at UC Berkeley. Prior to joining UC Berkeley, she was an Assistant Professor at Carnegie Mellon University from 2002 to 2007. Her research interest lies in security and privacy issues in computer systems and networks, including areas ranging from software security, networking security, database security, distributed systems security, to applied cryptography. She is the recipient of various awards including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the IBM Faculty Award, the George Tallman Ladd Research Award, the Okawa Foundation Research Award, and the Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award. She is also the author of multiple award papers in top security conferences, including the best paper award at the USENIX Security Symposium and the highest ranked paper at the IEEE Symposium on Security and Privacy.



Ultrametric Semantics of Reactive Programs: or, How to Prove a GUI Correct
Neel Krishnaswami | Microsoft Research Cambridge

2011-04-29, 13:00 - 14:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

In this talk, I describe a denotational model of higher-order functional reactive programming using ultrametric spaces and nonexpansive maps, which provides a natural Cartesian closed generalization of causal stream functions and guarded recursive definitions. To write programs, I also show how to define a normalizing type theory corresponding to this semantics.

I show how reactive programs written in this language can be implemented efficiently using an imperatively updated dataflow graph (with correctness proof, but not in this talk!), and demonstrate how GUI (graphical user interface) programs look when written in this style.
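
The causal stream functions that this model generalizes can be illustrated with a small sketch (ours, not from the talk): causality means element n of the output depends only on elements 0..n of the input, which is exactly what a fold-as-you-go generator transformer guarantees.

```python
def causal_map(step, stream, init):
    """Fold `step` over the stream, emitting one output per input.
    Each output depends only on the inputs seen so far: a causal
    (indeed nonexpansive) stream transformer."""
    state = init
    for x in stream:
        state = step(state, x)
        yield state

# Running maximum of an input signal, e.g. the largest value a GUI
# slider has reached so far.
running_max = causal_map(max, iter([3, 1, 4, 1, 5, 9, 2, 6]), float("-inf"))
print(list(running_max))  # [3, 3, 4, 4, 5, 9, 9, 9]
```

In the ultrametric reading, streams agreeing on their first n elements are close, and a transformer like this never decreases that agreement, which is the nonexpansiveness condition.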

Speaker's bio:

-



On Inter-Procedural Analysis of Programs with Lists and Data
Cezara Dragoi | LIAFA, Université Paris Diderot

2011-04-28, 14:15 - 15:30
Kaiserslautern, University of Kaiserslautern building 48, room 680

Abstract:

We address the problem of automatic synthesis of assertions on sequential programs with singly-linked lists containing data over infinite domains such as integers or reals. Our approach is based on an accurate abstract inter-procedural analysis. We define compositional techniques for computing procedure summaries concerning various aspects such as shapes, sizes, and data. Relations between program configurations are represented by graphs whose vertices represent list segments without sharing. The data in these list segments are characterized by constraints in new complex abstract domains. We define an abstract domain whose elements correspond to an expressive class of first-order universally quantified formulas, and an abstract domain of multisets. Our analysis computes the effect of each procedure in a local manner, by considering only the parts of the heap reachable from its actual parameters. To avoid losses of information, we introduce a mechanism based on unfolding/folding operations that allows the analysis in the domain of first-order formulas to be strengthened by the analysis in the multiset domains. The same mechanism is used for strengthening the sound (but incomplete) entailment operator of the domain of first-order formulas. We have implemented our techniques in a prototype tool and have shown that our approach is powerful enough for automatic (1) generation of non-trivial procedure summaries and (2) pre-/post-condition reasoning.

Speaker's bio:

-



Statistical Asynchronous Weak Commitment Scheme: A New Primitive to Design Statistical Asynchronous Verifiable Secret Sharing Schemes
Ashish Choudhury | Indian Statistical Institute, Kolkata

2011-04-18, 15:00 - 16:00
Saarbrücken building E 1 7, room 323

Abstract:

Asynchronous Weak Secret Sharing (AWSS) is a well-known primitive for the design of statistical Asynchronous Verifiable Secret Sharing (AVSS) schemes involving n parties. The existing efficient AWSS schemes are based on the idea of sharing a secret using a bivariate polynomial and invoke n^2 instances of another well-known asynchronous primitive, namely the Asynchronous Information Checking Protocol (AICP). In this work, we propose a substitute for AWSS called the asynchronous weak commitment scheme (AWCS), which has weaker requirements than AWSS. Due to its weaker requirements, an AWCS is conceptually much simpler to construct than an AWSS. In fact, we can design an AWCS using the simple Shamir secret sharing scheme (based on a univariate polynomial), instead of using bivariate polynomials. Moreover, our AWCS invokes only n instances of AICP. Therefore, the existing best known AVSS schemes call for only n^2 instances of AICP when they incorporate our AWCS, as compared to the n^3 instances required earlier. This matches the number of instances of ICP (the synchronous version of AICP) invoked in the best known statistical VSS schemes in the synchronous setting. We observe that we gain a factor of \Theta(n) in the communication complexity when our AWCS is used in the existing AVSS schemes in place of AWSS. This further saves a factor of \Theta(n) in the communication complexity of the best known existing asynchronous Byzantine agreement (ABA) and asynchronous multiparty computation (AMPC) protocols, where AVSS is used as an important stepping stone.
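
The univariate Shamir scheme that the proposed AWCS builds on can be sketched in a few lines (a self-contained illustration with invented parameters, not the AWCS protocol itself): the secret is the constant term of a random degree-t polynomial over a prime field; any t+1 shares recover it by Lagrange interpolation at zero, while t shares reveal nothing.

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def share(secret, t, n):
    """Split `secret` into n shares with threshold t (t+1 shares reconstruct)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456, t=2, n=5)
print(reconstruct(shares[:3]))  # any t+1 = 3 shares recover 123456
```

The claimed simplification is that an AWCS can get by with this univariate sharing plus n instances of AICP, where AWSS needs bivariate polynomials and n^2 instances.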

Speaker's bio:

-



Machine-Assisted Concurrent Programming
Martin Vechev | IBM T.J. Watson Research Center in New York

2011-04-18, 11:00 - 12:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor, Wartburg

Abstract:

Virtually all chips today are built with an increasing number of processor cores. To leverage these hardware trends, all future software will have to be concurrent.

The main challenge in developing reliable concurrent software is that a programmer is forced to coordinate a fantastic number of possible interactions. Manual coordination of these interactions (e.g., via locks) has proven to be extremely time-consuming and brittle, often resulting in programs that are incorrect, do not fully utilize the underlying computational resources, or both.

In this talk, I will present new techniques that harness the growing power of modern hardware and the increasing maturity of formal methods to simplify the process of program construction: in essence, given a concurrent program that violates a desired property, the techniques will analyze the (possibly infinite-state) program and attempt to automatically repair it by synthesizing the necessary synchronization.

A tool implementing these techniques has been successfully applied to a variety of challenging problems: from discovering tricky synchronization under weak memory models, to enforcing general atomicity properties, to obtaining new concurrent data structures and memory management algorithms.

Speaker's bio:

Martin Vechev is a Research Staff Member at the IBM T.J. Watson Research Center in New York. His research interests are in software analysis, programming languages, verification, and concurrency. He is interested in developing tools and techniques that improve software quality and programmer productivity. He is the recipient of a Best Paper Award, IBM Research Outstanding Technical Achievement and Extraordinary Accomplishment Awards and a John Atanasoff Award, awarded by the president of Bulgaria. He holds a B.Sc. from Simon Fraser University, Canada, and a Ph.D. from the University of Cambridge, England.



Resilience to Clustering: Analyzing Dynamics in Evolving Networks
Bivas Mitra | CNRS, Paris

2011-04-07, 13:30 - 13:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Understanding the dynamics of large-scale networks is a major challenge facing the network research community. Traditional graph-theoretic approaches have their own limitations and are not applicable due to the large size and dynamic nature of these networks. Against this background, my talk addresses two different issues in technological and social networks. The first half of my talk is directed towards understanding the resilience and emergence of technological networks, specifically superpeer networks. We propose an analytical framework to measure the resilience of superpeer networks in the face of peer churn and attacks. On the other hand, it is not obvious why the bootstrapping of peer nodes and other local dynamics result in the appearance of a bimodal degree distribution in superpeer networks like Gnutella. We develop a formalism that enables us to explain the emergence of bimodal networks in the face of dynamics such as peer bootstrapping, churn, and link rewiring. Further analysis leads us to formulate interesting bootstrapping protocols under which the superpeer network evolves with desired topological properties. The second half of my talk focuses on the detection and analysis of dynamical communities in social networks, specifically in citation networks. Most recent methods aim at extracting community partitions from successive graph snapshots and thereafter connecting or smoothing these partitions using clever time-dependent features and sampling techniques. These approaches nonetheless achieve "longitudinal" rather than "dynamic" community detection. Assuming that communities are fundamentally defined by a certain amount of interaction recurrence among a possibly disparate set of nodes over time, we suggest that the loss of information induced by considering successive snapshots makes it difficult to appraise essentially dynamic phenomena. We propose a methodology that tackles this issue in the context of citation datasets, and present several illustrations on both empirical and synthetic dynamic network datasets.

Speaker's bio:

Bivas Mitra is currently working as a postdoctoral researcher at the French National Centre for Scientific Research (CNRS), Paris. He received a PhD in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur in 2011, after an M.Tech. from IIT Kharagpur in 2003 and a B.Tech. from Haldia Institute of Technology, Vidyasagar University in 2001, both in Computer Science and Engineering. From 2003 to 2006, he worked as a lecturer in the Department of Computer Science and Engineering at Haldia Institute of Technology. He also worked at Soffront Software (India) Pvt. Ltd. as a Software Engineer in 2001. During his PhD, he received various fellowships, such as a national doctoral fellowship and an SAP Labs India doctoral fellowship, as well as several student travel grants to participate in international conferences. His research interests include complex networks, social networks, peer-to-peer networks, network modeling, optical networks, and the wireless Internet.



Controlling Access to Data: A Logic-Based Approach
Deepak Garg | Carnegie Mellon University

2011-04-04, 10:30 - 11:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Sensitive data in many organizations, such as intelligence, healthcare, and finance corporations, is often protected by complex access policies that rely on a mix of signed credentials, clock time, and system state. Enforcement of such policies through conventional mechanisms like access control lists is administratively infeasible. Motivated by this disparity, this talk presents the theoretical and practical aspects of a logic-based access control subsystem for representing, interpreting, and enforcing access policies. The theoretical underpinning of the subsystem is a new logic for representing access policies, together with its proof theory for determining their consequences. By carefully separating policy interpretation, policy decision, and policy enforcement, the subsystem leverages (conventionally inefficient) logical tools to attain very high throughput. The subsystem is evaluated through its implementation in a local file system, and its expressiveness is validated through a case study of policies used in the U.S. intelligence community.

Speaker's bio:

Deepak Garg is a post-doctoral researcher in the Cybersecurity Lab (CyLab) at Carnegie Mellon University. He obtained a Ph.D.  at Carnegie Mellon's Computer Science Department and an undergraduate degree in Computer Science and Engineering from the Indian Institute of Technology, New Delhi. His research interests are in the areas of computer security and privacy, formal logic and programming languages.



Opportunistic Wireless Network Architectures
Rohan Narayana Murty | Harvard University

2011-03-28, 10:30 - 11:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

With wireless networks slated to become the dominant method of Internet access in the future, the radio spectrum is fast becoming a scarce and expensive resource. Despite the significant and growing pressure on the demand for spectrum, large portions of the overall spectrum are severely under-utilized, ultimately leading to inefficient use of available capacity. To address these problems, we build opportunistic wireless networks, which work by continually seeking and using portions of the spectrum that are unused by the spectrum owners (incumbents) while ensuring non-interference with the incumbents. A prominent emerging setting where opportunistic wireless networking can work well is the so-called white spaces. Enabled by two historic rulings (in 2008 and 2010) by the Federal Communications Commission (FCC) in the United States, white spaces are those television channels that, at a given instant in time, are not used by the incumbents: television stations or wireless microphones.

In this talk I will present the challenges encountered when building the next generation of wireless networks that operate opportunistically over these white spaces. I will first present WhiteFi, which consists of new algorithms and protocols for networking over the white spaces. I will then present SenseLess, a white spaces network that obviates the need for white space devices to sense the presence of incumbents. I will present results and evaluations from prototype implementations and deployments of the two systems.

Speaker's bio:

Rohan Narayana Murty is a doctoral candidate in the Computer Science Department at Harvard University. He received an undergraduate degree in Computer Science from Cornell University in 2005. His research interests span networked systems, including networks, mobile computing, and distributed systems. His thesis work won the best paper award at SIGCOMM 2009, and he has received the Microsoft Research Graduate Fellowship, a Siebel Scholars Fellowship, and a Jim Gray Seed Grant.



Querying Probabilistic Data
Dan Suciu | University of Washington

2011-03-23, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

A major challenge in data management is how to manage uncertain data. There are many reasons for the uncertainty: the data may be extracted automatically from text, it may be derived from the physical world (such as RFID data), it may be integrated using fuzzy matches, or it may be the result of complex stochastic models. Whatever the reason for the uncertainty, a data management system needs to offer predictable performance for queries over such data.

In this talk I will address a fundamental problem in probabilistic databases: given a query, what is the computational complexity of evaluating it over probabilistic databases? Probabilistic inference is known to be hard in general, but once we fix a query, it becomes a specialized problem. I will show that Unions of Conjunctive Queries (also known as non-recursive datalog rules) admit a dichotomy: every query is either provably #P-hard or can be evaluated in PTIME. For practical purposes, the most interesting part of this dichotomy is the PTIME algorithm. It makes fundamental use of the Möbius inversion formula on finite lattices (which is the inclusion-exclusion formula plus term cancellation), and, because of that, it can perform probabilistic inference in PTIME on classes of Boolean expressions where other established methods fail, including OBDDs, FBDDs, inference based on bounded treewidth, and d-DNNFs.
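As a toy illustration of the inclusion-exclusion machinery underlying the Möbius inversion formula, the following sketch computes the probability of a union of conjunctive events over independent Boolean variables. This is a naive exponential-time version for intuition only; the function names and representation are assumptions, and the PTIME algorithm from the talk additionally exploits term cancellation on the lattice:

```python
from itertools import combinations

def prob_conj(vars_, p):
    """Probability that every variable in a conjunct is true,
    assuming independent variables with marginals in p."""
    out = 1.0
    for v in set(vars_):
        out *= p[v]
    return out

def prob_union(conjuncts, p):
    """P(C1 or ... or Cm) by inclusion-exclusion: sum over non-empty
    subsets S of (-1)^(|S|+1) * P(all conjuncts in S hold jointly)."""
    m = len(conjuncts)
    total = 0.0
    for k in range(1, m + 1):
        for subset in combinations(range(m), k):
            merged = set().union(*(set(conjuncts[i]) for i in subset))
            total += (-1) ** (k + 1) * prob_conj(merged, p)
    return total
```

For instance, with independent variables x and y each true with probability 0.5, the union of the conjuncts [x] and [y] has probability 0.5 + 0.5 - 0.25 = 0.75.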

Speaker's bio:

Dan Suciu is a Professor in Computer Science at the University of Washington. He received his Ph.D. from the University of Pennsylvania in 1995, then was a principal member of the technical staff at AT&T Labs until he joined the University of Washington in 2000. Suciu is conducting research in data management, with an emphasis on topics that arise from sharing data on the Internet, such as management of semistructured and heterogeneous data, data security, and managing data with uncertainties. He is a co-author of the book Data on the Web: from Relations to Semistructured Data and XML, holds twelve US patents, received the 2000 ACM SIGMOD Best Paper Award, the 2010 PODS Ten Year Test of Time Award, is a recipient of the NSF Career Award and of an Alfred P. Sloan Fellowship.



Towards Multicore-Ready Real-Time Operating Systems
Björn Brandenburg | University of North Carolina

2011-03-21, 10:30 - 11:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

With the recent advent of multicore chips, real-time applications are now increasingly being deployed on multiprocessors. The underlying real-time operating systems (RTOSs) must hence adapt to become "multicore-ready". This poses a challenge: which scheduling and synchronization algorithms should be used to maximize RTOS flexibility and efficiency on multicore platforms?

This talk focuses on two relevant results. In the first part, I present an overhead-aware scheduler evaluation methodology and a case study, which highlights that the traditional choice of fixed-priority scheduling is indeed not the best choice for multicore systems. In the second part, I present the first provably optimal multiprocessor real-time locking protocol, which answers a long-open question pertaining to blocking optimality in multiprocessor real-time systems.

Speaker's bio:

-



Privacy-Preserving Sharing of Sensitive Information
Emiliano De Cristofaro | University of California, Irvine

2011-03-17, 10:30 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor Wartburg

Abstract:

Modern society is increasingly dependent on (and fearful of) massive amounts and availability of electronic information. There are numerous everyday scenarios where sensitive data must be -- sometimes reluctantly or suspiciously -- shared between two or more entities without mutual trust. This prompts the need for mechanisms to enable limited (privacy-preserving) sharing of sensitive information. Among them, Private Set Intersection (PSI) techniques are particularly appealing whenever two parties wish to compute the intersection of their respective sets of items without revealing to each other any other information beyond the intersection. This talk motivates the need for PSI techniques with various features and illustrates several concrete PSI variants that offer appreciably better efficiency than prior work and guarantee stronger privacy properties. Finally, motivated by the proliferation of smartphones and the increasing amount of personal information shared ubiquitously, we identify some privacy issues specific to smartphone environments. We present several solutions geared towards privacy-enhanced smartphone applications, such as scheduling, location/interest sharing, and participatory sensing.
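For intuition, here is a toy sketch of one core PSI idea, commutative masking via Diffie-Hellman-style double exponentiation: because exponentiation commutes, doubly-masked values match exactly on the intersection. This is an insecure classroom sketch with assumed parameters and names, not one of the protocols from the talk:

```python
import hashlib
import random

# Toy prime modulus (a Mersenne prime); a real protocol uses a
# cryptographic group and authenticated message exchange.
P = 2**61 - 1

def h(item):
    """Hash an item into [1, P-1]."""
    d = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(d, "big") % (P - 1) + 1

def psi(set_a, set_b):
    """Each party masks hashed items with its own secret exponent;
    masking twice in either order yields the same value, so the two
    parties can compare doubly-masked values without seeing raw items."""
    ka = random.randrange(2, P - 1)  # party A's secret key
    kb = random.randrange(2, P - 1)  # party B's secret key
    masked_a = {pow(pow(h(x), ka, P), kb, P): x for x in set_a}
    masked_b = {pow(pow(h(y), kb, P), ka, P) for y in set_b}
    return {masked_a[m] for m in masked_a if m in masked_b}
```

Here a single function plays both parties for brevity; in an actual protocol the masked values are exchanged over the network and neither side ever learns the other's keys or non-intersecting items.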

Speaker's bio:

Emiliano De Cristofaro is a PhD candidate at the University of California, Irvine (UCI). He has been a research intern at NEC Europe Labs Heidelberg (Germany), INRIA Rhone Alpes (France), and Nokia Research Center Lausanne (Switzerland). His research interests include security, privacy, and applied cryptography.



Automated Construction of Machine-Checked Cryptographic Proofs
Santiago Zanella Béguelin | IMDEA Software Institute, Madrid

2011-03-15, 14:00 - 15:00
Saarbrücken building E1 4, room 024

Abstract:

The game-based approach is an established methodology for structuring security proofs of cryptographic schemes. Its essence lies in giving precise mathematical descriptions of the interaction between an adversary and an oracle system---such descriptions are referred to as games---and in organizing proofs as sequences of games, starting from a game that represents a security goal and proceeding by successive transformations to games that represent security assumptions. Game-based proofs can be rigorously formalized by representing games as probabilistic programs and relying on programming language techniques to justify proof steps. In this talk I will describe two tools that follow this approach: CertiCrypt and EasyCrypt.

* CertiCrypt is built upon the general-purpose proof assistant Coq and provides certified mechanisms to reason about probabilistic programs, including a relational Hoare logic, a theory of observational equivalence, verified program transformations, and ad-hoc techniques, such as reasoning about failure events. CertiCrypt has been notably applied to verify security proofs of OAEP and FDH.

* EasyCrypt is an automated tool written in OCaml that builds upon SMT solvers to synthesize machine-checked proofs from proof sketches, which include the full sequence of games, relational logic judgments and claims relating the probability of events in successive games. Relational judgments are proved using a verification condition generator that yields proof obligations that can be discharged by SMT solvers. To verify claims about probability, we implement a mechanism that combines some elementary rules to directly compute bounds on the probability of events with rules to derive probability (in)equalities from relational judgments. Most components of EasyCrypt are proof-producing, so that proofs built by EasyCrypt can be exported to CertiCrypt and verified using Coq, assuming proof obligations discharged by SMT solvers are valid. EasyCrypt has been notably used to verify the security of the Cramer-Shoup cryptosystem.

If time permits, the talk will feature a demo of the use of EasyCrypt to build a proof, export it to CertiCrypt and verify it using Coq.

Speaker's bio:

Santiago Zanella Béguelin obtained his degree in Computer Science from Universidad Nacional de Rosario, Argentina in 2006, and his Ph.D. under the supervision of Gilles Barthe on the formal certification of game-based cryptographic proofs from Ecole Nationale Supérieure des Mines de Paris in 2010. He is currently a Postdoctoral Research Fellow at the IMDEA Software Institute, Madrid, Spain.



Commitment and Coordination in Open Source Production: Studies in Wikipedia
Prof. Robert Kraut | Carnegie Mellon University, Pittsburgh, PA

2011-03-08, 11:00 - 12:00
Kaiserslautern building Uni Kaiserlautern, room Rotunda in bldg.57

Abstract:

Online production communities are increasingly important, creating commercially valuable software (e.g., Linux), generating scientific data (e.g., galaxyzoo.org) and building history's largest encyclopedia (Wikipedia). Motivating and coordinating the volunteers who do this work is a serious problem for many communities. This talk will review empirical research, primarily based on data from Wikipedia, that examines some of the interpersonal and managerial tactics that online production communities use to socialize new community members, to coordinate the work of volunteers and to motivate them. Our research indicates, for example, that broad-based participation is primarily valuable when a subset of the volunteers does the lion's share of the work, but is less valuable when work is distributed more evenly among them. It suggests that group goals help both motivate and socialize participants. It identifies the types of interactions between new community members and old-timers that foster commitment and continued participation. The talk will also review the CrowdForge system for coordinating the micro-contributions of Amazon Mechanical Turk workers, so that they can accomplish complex and highly interdependent projects.

Speaker's bio:

Robert Kraut is Herbert A. Simon Professor of Human-Computer Interaction at Carnegie Mellon University. He received his Ph.D. in Social Psychology from Yale University in 1973, and has previously taught at the University of Pennsylvania and Cornell University. He was a research scientist and manager at AT&T Bell Laboratories and Bell Communications Research for twelve years. Dr. Kraut has broad interests in the design and social impact of computing and computer-mediated communication. He conducts research on everyday use of the Internet, technology and conversation, collaboration in small work groups, computing in organizations and contributions to online communities. He is the lead author of a new book from MIT Press on Evidence-Based Social Design: Using the Social Sciences as the Basis for Building Online Communities. He has served on and chaired National Research Council committees on technology and work in small groups. More information about Professor Kraut is available at http://www.cs.cmu.edu/~kraut.



Software Synthesis using Automated Reasoning
Ruzica Piskac | EPFL

2011-03-03, 10:00 - 11:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

Software synthesis is a technique for automatically generating code from a given specification. The goal of software synthesis is to make software development easier while increasing both the productivity of the programmer and the correctness of the produced code. In this talk I will present an approach to synthesis that relies on the use of automated reasoning and decision procedures. First I will describe how to generalize decision procedures into predictable and complete synthesis procedures. Here completeness means that the procedure is guaranteed to find code that satisfies the given specification. I will illustrate the process of turning a decision procedure into a synthesis procedure using linear integer arithmetic as an example. Next I will outline a synthesis procedure for specifications given in the form of type constraints. The procedure takes into account polymorphic type constraints as well as code behavior. The procedure derives code snippets that use given library functions. I will conclude with an outlook on possible future research directions and applications of synthesis procedures. I believe that in the future we can make programming easier and more reliable by combining program analysis, software synthesis, and automated reasoning.
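To illustrate the flavor of turning a decision procedure into a synthesis procedure, the following sketch "compiles" one fixed linear integer arithmetic specification, c1*x + c2*y == a, into a program that produces a witness for any input a, using the extended Euclidean algorithm. The API is a hypothetical illustration, not the general procedure from the talk:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def synthesize_witness(c1, c2):
    """Given the spec c1*x + c2*y == a, synthesize (once, offline) a
    function that maps any input a to a witness (x, y), or to None
    when the spec is unrealizable for that input."""
    g, x0, y0 = extended_gcd(c1, c2)
    def witness(a):
        if a % g != 0:  # a solution exists iff gcd(c1, c2) divides a
            return None
        k = a // g
        return x0 * k, y0 * k
    return witness
```

The point mirrors the completeness claim in the abstract: the synthesized code is guaranteed to find a solution whenever one exists, and to report unrealizability otherwise.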

Speaker's bio:

Ruzica Piskac is a PhD candidate at EPFL, working under the supervision of Viktor Kuncak. Her primary research interests are program verification and software synthesis based on automated reasoning. She holds a Master's degree in Computer Science, obtained at the Max-Planck Institute for Computer Science in Saarbruecken, Germany. Her Master's thesis advisor was Harald Ganzinger and the topic of her thesis was formal verification of a priority queue checker using first-order theorem provers. She also holds a Dipl.-Ing. degree in mathematics from University of Zagreb, Croatia. Prior to her PhD studies at EPFL, Ruzica worked at the Digital Enterprise Research Institute in Innsbruck, where she was involved in several EU-funded projects on large-scale automated reasoning to support intelligent data access on the web (LarCK, SEKT, RW2). Ruzica received a Google Anita Borg memorial scholarship in 2010.



Accessibility and Beyond: Addressing the Technology Needs and Wants of Older Adults
Karyn Moffatt | University of Toronto

2011-02-28, 10:00 - 11:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building MPI-SWS Wartburg, room 5th floor

Abstract:

Older adults are quickly becoming diverse and savvy users of a broad range of technologies. Of adults age 65 and over, 38% are currently online, and of these, one in four has adopted social media. Though uptake remains low compared to that of younger generations, it is growing dramatically; for example, social networking use among internet users 65 and older doubled over the past year. These trends are encouraging because computer technologies offer immense potential to support individuals as they age - by compensating for cognitive and sensory impairments, by supporting independent living, and by promoting social interaction.

In this talk, I will first give an overview of my dissertation work on increasing the accessibility of pen-based interaction for older adults. Pen-based devices are an appealing platform for older adults because they allow users to take full advantage of their hand-eye coordination skills in a familiar form of interaction. However, research has chiefly focused on the accessibility limitations of the mouse, as it has historically garnered more widespread adoption. As we move beyond accessibility, we can begin to explore the ways in which technology can be designed to further enrich lives and fulfill unmet needs. Within that theme, I will present ongoing projects aimed at better understanding the technological needs of older adults, and at envisioning new technologies specifically targeted to those needs.

Speaker's bio:

Karyn Moffatt is the associate director of the Technologies for Aging Gracefully Lab (TAGLab) at the University of Toronto, a Natural Sciences and Engineering Research Council (NSERC) Postdoctoral Fellow, and a Canadian Institutes of Health Research (CIHR) Fellow in Health Care, Technology, and Place (HCTP). Her research explores the ways in which technology can be employed to meet human needs and enable older individuals to overcome everyday challenges and obstacles. This work has led to a number of publications in top-tier academic venues and has been recognized with awards at ACM ASSETS 2007 and ACM CHI 2009. Karyn received her PhD in Computer Science in 2010 from the University of British Columbia, where she worked with Professor Joanna McGrenere on methods for increasing the accessibility of pen-based interaction for older adults.



Designing systems that are secure and usable
Prof. M. Angela Sasse | University College London, UK

2011-01-12, 13:30 - 15:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

The number of systems and services that people interact with has increased rapidly over the past 20 years. Most of those systems and services have security controls, but until recently, the usability of those mechanisms was not considered. Research over the past 15 years has provided ample evidence that systems that are not usable are not secure either, because users make mistakes or devise workarounds that create vulnerabilities. In this talk, I will present an overview of the most pressing problems, and of what research on usable security (HCISec) has produced in response to this challenge. I will argue that past attempts have focused on improving user interfaces to security mechanisms, but that delivering systems with usable and effective security controls requires a change in how we design and implement security in systems and services. The talk will present examples of new approaches to requirements capture and system design, and of new approaches to 'security thinking' in organisations.

Speaker's bio:

M. Angela Sasse is the Professor of Human-Centred Technology and Head of Information Security Research in the Department of Computer Science at University College London, UK. A usability researcher by training, she started investigating the causes and effects of usability issues with security mechanisms in 1996. In addition to studying specific mechanisms such as passwords, biometrics, and access control, her research group has developed human-centred frameworks that explain the role of security, privacy, identity and trust in human interactions with technology. A list of project and publications can be found at http://sec.cs.ucl.ac.uk/people/m_angela_sasse/



The Hadoop++ Project
Jens Dittrich | Fachrichtung Informatik - Saarbruecken

2010-12-10, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

MapReduce is a computing paradigm that has gained a lot of attention in recent years from industry and research. Unlike parallel DBMSs, MapReduce allows non-expert users to run complex analytical tasks over very large data sets on very large clusters and clouds. However, this comes at a price: MapReduce processes tasks in a scan-oriented fashion. Hence, the performance of Hadoop --- an open-source implementation of MapReduce --- often does not match that of a well-configured parallel DBMS. We propose a new type of system named Hadoop++: it boosts task performance without changing the Hadoop framework at all. To reach this goal, rather than changing a working system (Hadoop), we inject our technology at the right places through UDFs only and affect Hadoop from the inside. This has three important consequences: First, Hadoop++ significantly outperforms Hadoop. Second, any future changes of Hadoop may directly be used with Hadoop++ without rewriting any glue code. Third, Hadoop++ does not need to change the Hadoop interface. Our experiments show the superiority of Hadoop++ over both Hadoop and HadoopDB for tasks related to indexing and join processing. In this talk I will present results from a VLDB 2010 paper as well as more recent work.
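For readers unfamiliar with the paradigm, here is a minimal in-memory sketch of MapReduce's scan-oriented processing (function names are illustrative assumptions; Hadoop itself distributes these phases across a cluster and materializes intermediate results):

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply map_fn to every record (a full scan) and group by key."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply reduce_fn to each key and its list of grouped values."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example.
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return sum(counts)
```

The full scan in the map phase is exactly the "scan-oriented fashion" the abstract refers to; Hadoop++'s injected UDFs aim to avoid such scans, e.g. via indexes, without modifying this framework.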

Link: http://infosys.cs.uni-saarland.de/hadoop++.php

Speaker's bio:

-



Logic, Policy, and Federation in the Cloud
Yuri Gurevich | Microsoft Research, Redmond

2010-11-29, 11:00 - 12:00
Saarbrücken building E1 4, room 024 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:



Imagine that you manage a public cloud. You want to attract lucrative customers, but they worry that their data will not be secure in your cloud. Of course they can encode their data before putting it in the cloud and decode it upon removal, but that doesn't buy much for them (or for you, because your cloud is then used just as a glorified blob store). How can you add value? Cryptographers have many tricks, but few of them are feasible at this point; most notably, searching on encrypted data with a single keyword is being considered.

But maybe we shouldn't reinvent the wheel. How do enterprises interact in the real world? Consider commerce, for example. Buyers and sellers from very different (in geography, culture, political system) countries succeed in making mutually beneficial deals. The sellers get paid, and the buyers get their goods. How does it work? Well, there is an involved support system that developed from centuries of experience: banks issuing letters of credit, insurance companies that underwrite the transactions and transportation, etc. And numerous policies are enforced. Similarly, there is an involved support system that allows Big Pharma to conduct clinical trials that straddle multiple countries. And so on.

Can we lift such support systems to the cloud scale and make them more efficient in the process? We believe that the answer is YES. An important ingredient of the desired solution is a high-level language for writing policies. As we mentioned above, numerous policies need to be enforced. They also need to be stated formally to allow automation, and they need to be high-level to allow comprehension and reasoning. Cryptography is indispensable in enforcing policies, but first we need a language to formulate policies succinctly and to exchange them among autonomous parties. The Distributed Knowledge Authorization Language (DKAL) was created for such purposes. It required foundational logic investigation, and it is in the process of tech transfer. This lecture is a popular introduction to DKAL and its applications to doing business via public clouds.

Speaker's bio:

Yuri Gurevich is a Principal Researcher at Microsoft Research in Redmond, WA. He is also Prof. Emeritus at the University of Michigan, an ACM Fellow, a Guggenheim Fellow, a member of Academia Europaea, and Dr. Honoris Causa of Belgian and Russian universities.



SMOOTHIE: Scalable transaction processing in the cloud
Nitin Gupta | Cornell University

2010-11-23, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

In this talk, I will introduce SMOOTHIE, a new framework for transaction processing. SMOOTHIE is based on optimistic concurrency control, decomposed into a control flow of components that can each scale independently. In addition, each of these components is nearly stateless and can thus be easily scaled up or down on demand. I will present the architecture of SMOOTHIE, explain how it achieves scalability, present thoughts on how to avoid bottlenecks through heat-based data placement, and discuss tradeoffs between different implementations of SMOOTHIE's components.
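As background, here is a minimal single-node sketch of the optimistic concurrency control that SMOOTHIE builds on: transactions read without locking and validate their read versions at commit time. The class and method names are illustrative assumptions, not SMOOTHIE's actual components:

```python
class OCCStore:
    """Toy optimistic concurrency control: reads proceed without locks,
    and commit validates that nothing read has changed since."""

    def __init__(self):
        self.data = {}      # key -> committed value
        self.version = {}   # key -> version number

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, txn, key):
        # Remember the version observed, for validation at commit.
        txn["reads"][key] = self.version.get(key, 0)
        return txn["writes"].get(key, self.data.get(key))

    def write(self, txn, key, value):
        txn["writes"][key] = value  # buffered until commit

    def commit(self, txn):
        # Validation phase: abort if any key read has since changed.
        for key, ver in txn["reads"].items():
            if self.version.get(key, 0) != ver:
                return False
        # Write phase: install buffered writes and bump versions.
        for key, value in txn["writes"].items():
            self.data[key] = value
            self.version[key] = self.version.get(key, 0) + 1
        return True
```

Because validation and the write phase touch only per-key version counters, the read, validate, and commit steps are natural candidates for the separately scalable, nearly stateless components the abstract describes.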

Speaker's bio:

Nitin Gupta (http://www.cs.cornell.edu/~niting/) is one of Johannes' PhD students. His research lies in the design of semantics for data-driven systems.



Entangled queries: an abstraction for declarative data-driven coordination
Lucja Kot | Cornell University

2010-11-12, 11:00 - 12:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

Today's Web 2.0 users increasingly want to perform data-driven tasks that involve collaboration and coordination. Friends want to make joint travel plans, students want to enroll in courses together, and busy professionals want to coordinate their schedules. These tasks are surprisingly difficult to support using existing database system abstractions, as the coordination required is in direct conflict with isolation -- a key property of ACID transactions.

In the Youtopia project at Cornell, we believe that data-driven coordination is so pervasive that it deserves dedicated support through a clean, declarative abstraction. It is time to move beyond isolation and support declarative, data-driven coordination (D3C) as a fundamental mode of data management. In this talk, I will introduce entangled queries, a simple yet powerful abstraction and mechanism to enable D3C. These queries allow users to specify the coordination required, such as "I want to travel on the same flight as my friend". At runtime, the system performs the coordination, ensuring the specifications are met. I will discuss the syntax, semantics, and evaluation of these queries, and introduce a range of broader research challenges associated with designing for D3C.
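The flight example can be made concrete with a minimal sketch (invented function and data; actual entangled queries have a declarative, SQL-like syntax and a full evaluation procedure):

```python
# Each user's ordinary query would return a set of acceptable flights;
# the entanglement "same flight as my friend" forces the system to pick
# a single joint answer satisfying both users at once.

def coordinate(acceptable_a, acceptable_b):
    """Return one answer per user such that the entanglement holds."""
    common = [f for f in acceptable_a if f in acceptable_b]
    return (common[0], common[0]) if common else None

alice = ["UA100", "LH400"]   # flights Alice would accept (made up)
bob = ["LH400", "AF250"]     # flights Bob would accept (made up)
print(coordinate(alice, bob))  # ('LH400', 'LH400')
```

The interesting point is that neither query has an answer in isolation; the system must solve for both simultaneously, which is exactly what conflicts with transaction isolation.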

Speaker's bio:

Lucja Kot received her PhD in Computer Science at Cornell University and is now a postdoctoral associate at Cornell. Her research interests are in collaborative data management and data integration, as well as database theory. She has worked on cooperative update exchange, XML constraints and static analysis. She has also spent time at Google developing solutions for Deep Web information extraction.



Practical memory safety for C
Periklis Akritidis | University of Cambridge

2010-09-06, 11:00 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

C facilitates high performance execution and low-level systems programming, but the lack of memory safety undermines security and reliability; for example, memory corruption bugs can breach security, and faults in kernel extensions can bring down the entire operating system. Memory safe languages, however, are unlikely to displace C in the near future, and solutions currently in use offer inadequate protection. Comprehensive proposals, on the other hand, are either too slow for practical use, or break backwards compatibility by requiring source code porting or generating incompatible binary code. My talk will present backwards-compatible solutions to prevent dangerous memory corruption in C programs at a low cost.

Speaker's bio:

Periklis Akritidis is a PhD candidate in the Computer Laboratory at the University of Cambridge, UK. His research interests include systems, computer security, and network security. His advisor is Dr. Steven Hand.



Effective scheduling techniques for high-level parallel-programming languages
Mike Rainey | University of Chicago

2010-08-17, 11:00 - 12:00
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

In the not-so-distant past, parallel programming was mostly the concern of programmers specializing in high-performance computing. Nowadays, on the other hand, many desktop and laptop computers come equipped with a species of shared-memory multiprocessor called a multicore processor, making parallel programming a concern for a much broader range of programmers. High-level parallel languages, such as Parallel ML (PML) and Haskell, seek to reduce the complexity of programming multicore processors by giving programmers abstract execution models, such as implicit threading, where programmers annotate their programs to suggest the parallel decomposition. Implicitly-threaded programs, however, do not specify the actual decomposition of computations or the mapping from computations to processors. The annotations act simply as hints that can be ignored and safely replaced with sequential counterparts. The parallel decomposition itself is the responsibility of the language implementation and, more specifically, of the scheduling system.

Threads can take arbitrarily different amounts of time to execute, and these times are difficult to predict. Implicit threading encourages the programmer to divide the program into threads that are as small as possible, because doing so increases the flexibility of the scheduler in its duty to distribute work evenly across processors. The downside of such fine-grain parallelism is that if the total scheduling cost is too large, then parallelism is not worthwhile. This problem is the focus of this talk.

The starting point of this talk is work stealing, a scheduling policy well known for its scalable parallel performance, and the work-first principle, which serves as a guide for building efficient implementations of work stealing. In this talk, I present two techniques, Lazy Promotion and Lazy Tree Splitting, for implementing work stealing. Both techniques derive their efficiency from adhering to the work-first principle. Lazy Promotion is a strategy that improves the performance, in terms of execution time, of a work-stealing scheduler by reducing the amount of load the scheduler places on the garbage collector. Lazy Tree Splitting is a technique for automatically scheduling the execution of parallel operations over trees to yield scalable performance and eliminate the need for per-application tuning. I use Manticore, PML's compiler and runtime system, and a sixteen-core NUMA machine as a testbed for these techniques.
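The owner/thief deque discipline at the heart of work stealing can be sketched as follows (a toy sequential simulation; the structure and parameters are invented for illustration and say nothing about Manticore's actual scheduler):

```python
# Toy work-stealing sketch: each worker owns a deque; the owner pushes
# and pops at the "hot" end, while idle workers steal from the "cold"
# end of a randomly chosen victim.
import random
from collections import deque

def run(tasks, n_workers=4, seed=0):
    """Simulate work stealing; returns tasks in completion order."""
    rng = random.Random(seed)
    deques = [deque() for _ in range(n_workers)]
    for i, t in enumerate(tasks):
        deques[i % n_workers].append(t)     # initial (uneven) placement
    done = []
    while any(deques):
        for w, dq in enumerate(deques):
            if dq:
                done.append(dq.pop())       # owner works the hot end
            else:
                victims = [v for v in range(n_workers) if deques[v]]
                if victims:
                    v = rng.choice(victims)
                    dq.append(deques[v].popleft())  # steal the cold end
    return done

print(sorted(run(list(range(8)), n_workers=3)) == list(range(8)))  # True
```

Stealing from the opposite end of the victim's deque is what keeps the common, steal-free path cheap, which is the essence of the work-first principle mentioned above.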

In addition, I present two empirical studies. In the first study, I evaluate Lazy Promotion over six PML benchmarks. The results demonstrate that Lazy Promotion either outperforms or performs the same as an alternative scheme based on Eager Promotion. This study also evaluates the design of the Manticore runtime system, in particular, the split-heap memory manager, by comparing the system to an alternative system based on a unified-heap memory manager, and showing that the unified version has limited scalability due to poor locality. In the second study, I evaluate Lazy Tree Splitting over seven PML benchmarks by comparing Lazy Tree Splitting to its alternative, Eager Tree Splitting. The results show that, although the two techniques offer similar scalability, only Lazy Tree Splitting is suitable for building an effective language implementation.

Speaker's bio:

Michael Rainey received a BSc in Computer Science and a BSc in Cognitive Science from Indiana University in 2004, and an MSc in Computer Science from the University of Chicago in 2007. He expects to receive a PhD in Computer Science from the University of Chicago in the summer of 2010.



Solvers for Software Reliability and Security
Vijay Ganesh | MIT

2010-07-22, 13:00 - 14:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

The task of building reliable and secure software remains one of the most important and challenging issues in computer science. In recent years, there has been rapid progress in the scalability and effectiveness of software reliability tools. A key reason for this success is the dramatic improvement in the speed of constraint solvers over the last decade. Constraint solvers are essential components of most software reliability tools, whether they are based on formal methods, program analysis, testing or synthesis. My research on constraint solvers has directly contributed to this trend of increasing solver efficiency and expressive power, thus advancing the state-of-the-art in software reliability research.

In this talk, I will present two solvers that I have designed and implemented, namely, STP and HAMPI. I will talk about the techniques that enable STP and HAMPI to scale, and also some theoretical results. I will also talk about the contexts and applications where each solver is best suited.

STP is a solver for the theory of bit-vectors and arrays. STP was one of the first constraint solvers to enable an exciting new testing technique called Dynamic Systematic Testing (aka Concolic Testing). STP-enabled concolic testers have been used to find hundreds of previously unknown bugs in Unix utilities, OS kernels, media players, and commercial software, some with approximately a million lines of code.

Next, I will describe HAMPI, a solver for a rich theory of equality over bounded string variables, bounded regular expressions, and context-free grammars. Constraints in this theory are generated by analysis of string-manipulating programs. HAMPI has been used to find many unknown SQL injection vulnerabilities in applications with more than 100,000 lines of PHP code using static and dynamic analysis.
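As a toy stand-in for what a string solver does, one can ask for a bounded string that, when substituted into a query template, produces an injection pattern. This brute-force search is for illustration only and is nothing like HAMPI's actual algorithm; the template and target are invented:

```python
# Toy bounded string "solving": find a string v of length <= max_len
# over a small alphabet such that embedding v in a query template makes
# the target pattern appear (here, a tautology-style SQL injection).
import itertools

def solve(template, target, alphabet, max_len):
    for n in range(max_len + 1):
        for chars in itertools.product(alphabet, repeat=n):
            v = "".join(chars)
            if target in template.format(v):
                return v
    return None

template = "SELECT * FROM users WHERE pw='{}'"
v = solve(template, "1'='1", "1'=", 5)
print(v)  # 1'='1
```

A real solver replaces this enumeration with constraint reasoning over bounded regular expressions and context-free grammars, which is what makes analysis of 100,000-line PHP applications feasible.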

Finally, I will conclude my talk with two future research programs. First, I will discuss how faster solvers can enable qualitatively novel approaches to software reliability. Second, I will discuss how we can go from specific solver techniques to solver design paradigms for rich logics.

Speaker's bio:

Dr. Vijay Ganesh has been a Research Scientist at MIT since 2007. He completed his PhD in computer science at Stanford University in September 2007. He also has an MS in computer science from Stanford University, and a Bachelor of Technology degree from the College of Engineering, Trivandrum, India.

His primary research interests are constraint solvers (SAT/SMT solvers), and their applications to software reliability, computer security and biology. He works on both the theory and practice of constraint solvers. He has designed and implemented several constraint solvers, most notably, STP and HAMPI. STP was one of the first solvers to enable an exciting new testing technique called systematic dynamic testing (or concolic testing). STP has been used in more than 100 research projects relating to software reliability and computer security. More recently he designed another solver, HAMPI, aimed at solving string constraints generated by the analysis of PHP, JavaScript and Perl programs. His paper on HAMPI won the ACM Distinguished Paper Award (2009). STP was the co-winner of the SMTCOMP competition for bit-vector solvers in 2006. Dr. Ganesh has also done research in automated software testing, in particular, whitebox fuzzing.



Timing Analysis of Mixed Time / Event-Triggered Multi-Mode Systems
Linh Phan | University of Pennsylvania

2010-07-19, 13:00 - 14:30
Kaiserslautern building G26, room 206 / simultaneous videocast to Saarbrücken building E1 5, room 5th floor

Abstract:

Many embedded systems operate in multiple modes, where mode switches can be both time- as well as event-triggered. While timing and schedulability analysis of the system while it is operating in a single mode has been well studied, it is always difficult to piece together the results from different modes in order to deduce the timing properties of a multi-mode system. In this talk, I will present a model and associated analysis techniques to describe embedded systems that process multiple bursty/complex event/data streams and in which mode changes are both time- and event-triggered. Compared to previous studies, our model is very general and can capture a wide variety of real-life systems. Our analysis techniques can be used to determine different performance metrics, such as the maximum fill-levels of different buffers and the delays suffered by the streams being processed by the system. The main novelty in our analysis lies in how we piece together results from the different modes in order to obtain performance metrics for the full system.

Speaker's bio:

-



Approximate Inference Algorithms in Markov Random Field with Their Applications
Kyomin Jung | KAIST

2010-07-19, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Markov Random Field (MRF) provides an elegant and succinct abstraction to capture inter-dependency between a large number of random variables; applications abound in communication networks, signal processing, statistical physics, combinatorics, biology, etc. In most of these applications, the key task pertains to inferring the most likely assignment (aka MAP). This problem is computationally hard in general. Therefore, various approximations have been developed that utilize the graph structure of the MRF for efficient computation. Popular approaches like Belief Propagation (and its variants) work well when the underlying graph has large girth, e.g., sparse graphical codes like LDPC codes.

In many applications of interest, graphs do have lots of short cycles, but they naturally possess some "geometry". We develop a new class of approximation algorithms that utilize this geometry of the underlying graph, called polynomial growth, to obtain an efficient approximation with a provable guarantee for the inference problem.

In this talk, I will describe the main idea of Belief Propagation and our new algorithm based on simple local updates. I will describe their applications to wireless networks and image processing.
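To make the MAP-inference task concrete, here is exact max-product dynamic programming on a tiny binary chain MRF, the tree-structured setting where Belief Propagation is exact (all potential values are made up for illustration):

```python
# MAP inference on a binary chain MRF x_0 - x_1 - ... - x_{n-1}:
# maximize  prod_i phi[i][x_i] * prod_i psi[x_i][x_{i+1}]
# by a backward max-product sweep followed by forward decoding.

def map_chain(phi, psi):
    n = len(phi)
    # m[i][v] = best score of the suffix i..n-1 given x_i = v
    m = [[0.0, 0.0] for _ in range(n)]
    m[n - 1] = phi[n - 1][:]
    for i in range(n - 2, -1, -1):
        for v in range(2):
            m[i][v] = phi[i][v] * max(psi[v][w] * m[i + 1][w]
                                      for w in range(2))
    # Decode forward, always following the maximizing choice.
    assign = [max(range(2), key=lambda v: m[0][v])]
    for i in range(1, n):
        prev = assign[-1]
        assign.append(max(range(2), key=lambda w: psi[prev][w] * m[i][w]))
    return assign

phi = [[1, 2], [1, 1], [3, 1]]   # node potentials (invented)
psi = [[3, 1], [1, 3]]           # pairwise potential favoring agreement
print(map_chain(phi, psi))  # [0, 0, 0]
```

On graphs with cycles this exact sweep no longer applies, which is where the geometry-based approximation described above comes in.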

Speaker's bio:

Kyomin Jung is an assistant professor in the KAIST CS department, with joint appointments in the KAIST EE and Math departments. He received his Ph.D. from MIT Mathematics in 2009. His main research interests include graphical models, complex network modeling and analysis, and machine learning. During the summers of his Ph.D., he worked as a research intern at Microsoft Research Cambridge (2008), IBM Watson Research (2007), Bell Labs Murray Hill (2006), and the Samsung Advanced Institute of Technology (2005). He received his B.Sc. in Mathematics from Seoul National University in 2003, and he won a gold medal at the 1995 International Mathematical Olympiad (IMO).



Spoken Networks: Analyzing face-to-face conversations and how they shape our social connections
Tanzeem Choudhury | Dartmouth College

2010-06-25, 13:30 - 14:30
Kaiserslautern building G26, room 206

Abstract:

With the proliferation of sensor-rich mobile devices, it is now possible to collect data that continuously capture the real-world social interactions of entire groups of people. These new data sets provide opportunities to study the social networks of people as they are observed "in the wild." However, traditional methods often model social networks as static and binary, which are inadequate for continuous behavioral data. Networks derived from behavioral data are almost always temporal, are often non-stationary, and have finer grained observations about interactions as opposed to simple binary indicators. Thus, new techniques are needed that can take into account the variable tie intensities and the dynamics of a network as it evolves in time. In this talk, I will provide an overview of the computational framework we have developed for modeling the micro-level dynamics of human interactions as well as the macro-level network structure and its dynamics from local, noisy sensor observations. Furthermore, by studying the micro and macro levels simultaneously we are able to link dyad-level interaction dynamics (local behavior) to network-level prominence (a global property). I will conclude by providing some specific examples of how the methods we have developed can be applied more broadly to better understand and enhance the lives and health of people.

Based on joint work with Danny Wyatt (University of Washington), James Kitts (Columbia), Jeff Bilmes (University of Washington), Andrew Campbell (Dartmouth), and Ethan Berke (Dartmouth Medical School).

Speaker's bio:

Tanzeem Choudhury is an assistant professor in the computer science department at Dartmouth. She joined Dartmouth in 2008 after four years at Intel Research Seattle. She received her PhD from the Media Laboratory at MIT. Tanzeem develops systems that can reason about human activities, interactions, and social networks in everyday environments. Tanzeem’s doctoral thesis demonstrated for the first time the feasibility of using wearable sensors to capture and model social networks automatically, on the basis of face-to-face conversations. MIT Technology Review recognized her as one of the top 35 innovators under the age of 35 (2008 TR35) for her work in this area. Tanzeem has also been selected as a TED Fellow and is a recipient of the NSF CAREER award. More information can be found at Tanzeem's webpage: http://www.cs.dartmouth.edu/~tanzeem



Privacy and forensics in federated distributed systems
Andreas Haeberlen | University of Pennsylvania

2010-06-15, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

In this talk, I will describe two of my ongoing projects, one on collaborative security and the other on secure network provenance.

Collaborative security is an approach to fighting security threats that span many different administrative domains. For example, it is often difficult for Internet service providers to detect botnets because each domain can locally observe only a small part of the botnet; however, exchanging information across domains is challenging because of privacy concerns. We are working on a system that enables domains to share information in a controlled fashion. Each domain establishes a 'privacy budget' and allows other domains to ask queries in a special language; the 'privacy cost' of each query is inferred automatically and deducted from the budget. This is accomplished through a combination of type systems and differential privacy.
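The budget mechanism can be sketched with differential privacy's standard Laplace noise (a hypothetical PrivateTable class; the real system's query language and automatic cost inference are far richer than this):

```python
# Sketch of a per-domain privacy budget: each query costs epsilon,
# deducted from the budget; answers get Laplace(1/epsilon) noise,
# the standard mechanism for sensitivity-1 counting queries.
import random

class PrivateTable:
    def __init__(self, rows, budget):
        self.rows, self.budget = rows, budget

    def noisy_count(self, predicate, epsilon, rng=None):
        if epsilon > self.budget:
            raise PermissionError("privacy budget exhausted")
        self.budget -= epsilon              # deduct the query's cost
        rng = rng or random.Random(0)
        exact = sum(1 for r in self.rows if predicate(r))
        # Difference of two Exp(epsilon) variates is Laplace(0, 1/epsilon).
        noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
        return exact + noise

table = PrivateTable(list(range(100)), budget=1.0)
table.noisy_count(lambda r: r < 40, epsilon=0.5)   # allowed
table.noisy_count(lambda r: r >= 40, epsilon=0.5)  # allowed; budget now 0
# A third epsilon=0.5 query would raise PermissionError.
```

The hard part the abstract alludes to is inferring the epsilon cost automatically for queries in a rich language, rather than having the analyst declare it.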

Secure network provenance (SNP) is a technique for answering forensic questions about distributed systems in which some nodes have been compromised by an adversary. Each node maintains a record of past states and the (local or remote) causes of state transitions. Thus, if the symptoms of an intrusion are observed, it is possible to trace them back to their root causes or, conversely, to determine how an intrusion on one node has affected other nodes.
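The backward-tracing idea can be sketched as a walk over a logged cause graph (a hypothetical log format with invented event names, not SNP's actual record structure):

```python
# Toy provenance tracing: the log maps each event to the (local or
# remote) events that caused it; tracing a symptom walks the graph
# backwards until it reaches events with no recorded cause (roots).

def trace_back(log, symptom):
    roots, stack, seen = [], [symptom], set()
    while stack:
        e = stack.pop()
        if e in seen:
            continue
        seen.add(e)
        causes = log.get(e, [])
        if not causes:
            roots.append(e)     # no recorded cause: a root cause
        stack.extend(causes)
    return sorted(roots)

log = {"alert": ["bad_route"],
       "bad_route": ["evil_update", "stale_entry"]}
print(trace_back(log, "alert"))  # ['evil_update', 'stale_entry']
```

The adversarial setting makes the real problem much harder: compromised nodes may lie in their logs, so SNP must make the records themselves tamper-evident.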

Speaker's bio:

Andreas is an Assistant Professor of Computer Science at the University of Pennsylvania. His research interests include distributed systems, networking, social networks, and security. Prior to joining UPenn, Andreas did his doctoral research at the MPI-SWS. This year, he received the Otto Hahn Medal of the Max Planck Society for his research on accountability.



Reverse Traceroute
Arvind Krishnamurthy | University of Washington

2010-06-14, 10:15 - 11:15
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Traceroute is the most widely used Internet diagnostic tool today. Network operators use it to help identify routing failures, poor performance, and router misconfigurations. Researchers use it to map the Internet, predict performance, geolocate routers, and classify the performance of ISPs. However, traceroute has a fundamental limitation that affects all these applications: it does not provide reverse path information. Although various public traceroute servers across the Internet provide some visibility, no general method exists for determining a reverse path from an arbitrary destination.

We address this longstanding limitation by building a reverse traceroute tool. Our tool provides the same information as traceroute, but for the reverse path, and it works in the same case as traceroute, when the user may lack control of the destination. Our approach combines a number of ideas: source spoofing, IP timestamp and record route options, and multiple vantage points. In the median case our tool finds 87% of the hops seen in a directly measured traceroute along the same path, versus only 38% if one simply assumes the path is symmetric, a common fallback given the lack of available tools. We then use our reverse traceroute tool to study previously unmeasurable aspects of the Internet: we uncover more than a thousand peer-to-peer AS links invisible to current topology mapping efforts, we examine a case study of how a content provider could use our tool to troubleshoot poor path performance, and we measure the latency of individual backbone links with, on average, sub-millisecond precision.

Speaker's bio:

I received my PhD from UC Berkeley, was on the faculty at Yale, and joined the UW faculty in 2005. I work primarily at the boundary between the theory and practice of distributed systems and computer networks. My current research interests include peer-to-peer systems, Internet measurements, systems security, and network protocol design.



Exploiting Language Abstraction to Optimize Memory Efficiency
Jennifer B. Sartor | University of Texas, Austin

2010-05-12, 14:00 - 14:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Memory continues to be a bottleneck in modern systems as application complexity and the number of cores have increased, generating more traffic. These complex applications are written largely in high-level managed languages, which offer an opportunity to dynamically optimize memory performance because they virtualize memory management. We explored memory inefficiencies in a Java virtual machine, performing a study of data compression techniques. I will present the results of that study, showing arrays are a dominant source of heap bloat. Focusing on arrays, we found the traditional contiguous layout precludes space optimizations and does not offer memory management time and space bounds. I show how to exploit the opportunities afforded by managed languages to implement an efficient discontiguous array layout with tunable optimization parameters that improve space usage. Having attacked memory performance on the software side, I will then describe new work that takes a cooperative software-hardware approach. I show how a memory manager can communicate regions of dead data to the architecture, allowing it to eliminate useless writes, substantially reducing memory write traffic. My research combines the flexibility and productivity of high-level managed languages with improved memory efficiency that is critical to current and future hardware.
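A discontiguous layout of the kind described can be sketched as a "spine" of fixed-size chunks (a simplified model for illustration; the chunk size stands in for the tunable optimization parameters, and a real implementation lives inside the VM's heap, not in Python objects):

```python
# Discontiguous ("chunked") array: elements live in fixed-size chunks
# reachable from a spine, so no single large contiguous block is
# needed; indexing splits into a spine index and an offset.

class ChunkedArray:
    def __init__(self, length, chunk=4, fill=0):
        self.chunk = chunk
        self.length = length
        self.spine = [[fill] * min(chunk, length - i)
                      for i in range(0, length, chunk)]

    def __getitem__(self, i):
        return self.spine[i // self.chunk][i % self.chunk]

    def __setitem__(self, i, v):
        self.spine[i // self.chunk][i % self.chunk] = v

a = ChunkedArray(10, chunk=4)
a[7] = 42
print(a[7], len(a.spine))  # 42 3
```

Bounding the chunk size is what gives the memory manager predictable time and space bounds, and a too-short final chunk shows how the layout can avoid the space waste of rounding every array up.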

Speaker's bio:

Jennifer B. Sartor received her B.S. in Math and honors Computer Science, with a minor in Spanish, at The University of Arizona in December 2001. She obtained her Masters degree in 2004 and expects her PhD in 2010, both in Computer Science at The University of Texas at Austin. Her PhD research focuses on dynamic memory optimization using high-level managed languages. At PLDI 2009, Jennifer won the ACM Student Research Competition and she won Best Student Presentation at ISMM 2008.



Programmable Self-Adjusting Computation
Ruy Ley-Wild | Carnegie Mellon University

2010-04-27, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

In the self-adjusting computation model, programs can respond automatically and efficiently to input changes by tracking the dynamic data dependencies of the computation and incrementally updating the output as needed. After a run from scratch, the input can be changed and the output can be updated via change-propagation, a mechanism for re-executing the portions of the computation affected by the changes while reusing the unaffected parts. Previous research shows that self-adjusting computation can be effective at achieving near-optimal update bounds for various applications. We address the question of how to write and reason about self-adjusting programs.

We propose a language-based technique for annotating ordinary programs and compiling them into equivalent self-adjusting versions. We also provide a cost semantics and a concept of trace distance that enables reasoning about the effectiveness of self-adjusting computation at the source level. To facilitate asymptotic analysis, we propose techniques for composing and generalizing concrete distances via trace contexts (traces with holes). The translation preserves the extensional semantics of the source programs, the intensional cost of from-scratch runs, and ensures that change-propagation between two evaluations takes time bounded by their relative distance.
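The underlying change-propagation model (though not the paper's compilation technique, cost semantics, or trace distance) can be illustrated with a toy dependency-graph sketch in which all names are invented:

```python
# Toy self-adjusting computation: cells form a dependency graph.
# Setting an input re-executes only the computations that
# (transitively) read it, with a cutoff when an output is unchanged.

class Cell:
    def __init__(self, value=None, compute=None, deps=()):
        self.compute, self.deps, self.readers = compute, list(deps), []
        for d in self.deps:
            d.readers.append(self)
        self.value = (value if compute is None
                      else compute(*[d.value for d in self.deps]))

    def set(self, value):
        """Change an input, then propagate to dependent cells."""
        self.value = value
        for r in self.readers:
            r._refresh()

    def _refresh(self):
        new = self.compute(*[d.value for d in self.deps])
        if new != self.value:          # cutoff: stop if unchanged
            self.value = new
            for r in self.readers:
                r._refresh()

x, y = Cell(2), Cell(3)
s = Cell(compute=lambda a, b: a + b, deps=[x, y])
d = Cell(compute=lambda v: v * 2, deps=[s])
x.set(10)                    # change-propagation updates s, then d
print(s.value, d.value)      # 13 26
```

The compilation approach in the talk lets ordinary programs acquire this behavior automatically, rather than requiring the programmer to wire up such a graph by hand.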

Speaker's bio:

Ruy Ley-Wild studied math and computing at Carnegie Mellon University, and then continued to pursue a Ph.D. in computer science.



Accurate Analysis of Large Private Datasets
Vibhor Rastogi | University of Washington

2010-04-15, 10:00 - 11:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Today, no individual has full control over access to their personal information. Private data collected by hospitals and universities, and also by websites like Google and Facebook, contain valuable statistical facts that can be mined for research and analysis, e.g., to analyze disease outbreaks, detect traffic patterns on the road, or understand browsing trends on the web. But concerns about individual privacy severely restrict their use; for example, privacy attacks recently led AOL to withdraw its published search-log data.

To remedy this, much recent work focuses on data analysis with formal privacy guarantees. This has given rise to differential privacy, considered by many to be the gold standard of privacy. However, few practical techniques satisfying differential privacy exist for complex analysis tasks (e.g., analyses involving complex query operators) or new data models (e.g., data having temporal correlations or uncertainty). In this talk, I will discuss techniques that fill this void. I will first discuss a query-answering algorithm that can handle joins (previously, no private technique could accurately answer the join queries arising in many analysis tasks). This algorithm makes several privacy-preserving analyses over social network graphs possible for the first time. Then I will discuss a query-answering technique over time-series data, which enables private analysis of GPS traces and other temporally correlated data. Third, I will discuss an access control mechanism for uncertain data, which enables enforcing security policies on RFID-based location data. Finally, I will conclude by discussing some privacy and security problems in building next-generation computing systems based on new models for data (e.g., uncertain data), computing (e.g., cloud computing), and human-computer interaction (e.g., ubiquitous systems).

Speaker's bio:

Vibhor Rastogi is a doctoral candidate in the Database group at the University of Washington. His dissertation develops techniques for privacy-preserving data analysis. His other research interests include data uncertainty, data cleaning, and problems in large-scale data management.



Binary Program Analysis and Model Extraction for Security Applications
Juan Caballero | Carnegie Mellon University

2010-04-01, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

In this talk I present a platform to extract models of security-relevant functionality from program binaries, enabling multiple security applications such as active botnet infiltration, finding deviations between implementations of the same functionality, vulnerability signature generation, and finding content-sniffing cross-site scripting (XSS) attacks. In this talk, I present two applications: active botnet infiltration and finding content-sniffing XSS attacks.

Botnets, large networks of infected computers under control of an attacker, are one of the dominant threats in the Internet, enabling fraudulent activities such as spamming, phishing, and distributed denial-of-service attacks. To build strong botnet defenses, defenders need information about the botnet's capabilities and the attacker's actions. One effective way to obtain that information is through active botnet infiltration, but such infiltration is challenging due to the encrypted and proprietary protocols that botnets use to communicate. In this talk, I describe techniques for reverse-engineering such protocols and present how we use this information to infiltrate a prevalent, previously not analyzed, spam botnet.

Cross-site scripting attacks are the most prevalent class of attacks nowadays. One subtle class of overlooked XSS attacks are content-sniffing XSS attacks. In this talk, I present model extraction techniques and how they enable finding content-sniffing XSS attacks. We use those models to find attacks against popular web sites and browsers such as Wikipedia when accessed using Internet Explorer 7. I describe our defenses for these attacks and how our proposals have been adopted by widely used browsers such as Google Chrome and IE8, as well as standardization groups.

Speaker's bio:

Juan Caballero is a Ph.D. candidate in Electrical and Computer Engineering at Carnegie Mellon University and a visiting student researcher at the EECS department of University of California, Berkeley, under the supervision of his advisor Prof. Dawn Song.

His research interests center on computer security, including security issues in systems, software, and networks. His Ph.D thesis deals with developing binary program analysis techniques to enable security applications such as active botnet infiltration, finding deviations between implementations of the same functionality, signature generation, and finding evasion attacks. His research bridges other disciplines such as networking and programming languages.

Juan is a recipient of the La Caixa fellowship for graduate studies and won the best paper award at the Usenix Security Symposium in 2007. He holds a M.Sc. in Electrical Engineering from the Royal Institute of Technology (KTH) and a Telecommunications Engineer degree from Universidad Politecnica de Madrid (UPM).



Taming the Malicious Web: Avoiding and Detecting Web-based Attacks
Marco Cova | University of California, Santa Barbara

2010-03-29, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The world wide web is an essential part of our infrastructure and a predominant means for people to interact, do business, and participate in democratic processes. Unfortunately, in recent years, the web has also become a more dangerous place. In fact, web-based attacks are now a prevalent and serious threat. These attacks target both web applications, which store sensitive data (such as financial and personal records) and are trusted by large user bases, and web clients, which, after a compromise, can be mined for private data or used as drones of a botnet.

In this talk, we will present an overview of our techniques to detect, analyze, and mitigate malicious activity on the web. In particular, I will present a system, called Wepawet, which targets the problem of detecting web pages that launch drive-by-download attacks against their visitors. Wepawet visits web pages with an instrumented browser and records events that occur during the interpretation of their HTML and JavaScript code. This observed activity is analyzed using anomaly detection techniques to classify web pages as benign or malicious. We made our tool available as an online service, which is currently used by several thousand users every month.

We will also discuss techniques to automatically detect vulnerabilities and attacks against web applications. In particular, we will focus on static analysis techniques to identify ineffective sanitization routines and to detect vulnerabilities stemming from the interaction of multiple modules of a web application. These techniques found tens of vulnerabilities in several real-world web applications.

Speaker's bio:

-



Formal Program Verification Through Characteristic Formulae
Arthur Chargueraud | INRIA

2010-03-25, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The characteristic formula of a program is a logical formula that implies any valid post-condition for that program. In this talk, I will explain how to build, in a systematic manner, the characteristic formula of a given purely functional Caml program, and then explain how one can use this formula to verify the program interactively, using the Coq proof assistant. This new, sound and complete approach to the verification of total correctness properties has been applied to the formalization of a number of data structures taken from Chris Okasaki's reference book. My presentation will include demos based on those case studies.

Speaker's bio:

-



Program equivalence and compositional compiler correctness
Chung-Kil Hur | Laboratoire PPS

2010-03-22, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We introduce a notion of program equivalence between different languages, discuss the mathematical techniques it relies on, show how it gives a compositional notion of compiler correctness, and, more ambitiously, how it may be seen as a technique for proving the correctness of programs. We also briefly discuss how these ideas can be formalized and verified in the Coq proof assistant.

Speaker's bio:

-



Towards full verification of concurrent libraries
Viktor Vafeiadis | University of Cambridge

2010-03-18, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Modern programming platforms, such as Microsoft's .NET, provide libraries of efficient concurrent data structures, which are used in a wide range of applications.  In this talk, I will discuss some of the pitfalls in implementing such concurrent data structures, what correctness of these libraries means, how one can formally prove that a given library is correct, and the extent to which these proofs can be carried out automatically.
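One classic pitfall the talk alludes to can be shown in a few lines. The sketch below (illustrative only, not taken from the talk) models an increment as a separate read and write, enumerates every interleaving of two concurrent increments, and finds the "lost update" outcome:

```python
# Toy illustration: why a naive read-modify-write counter breaks under
# concurrency. Each increment is two steps: read the counter, then write
# back the incremented value. We enumerate all interleavings of two such
# increments and collect the possible final values.
from itertools import permutations

def run(schedule):
    """Execute a schedule: a sequence of (thread_id, step) pairs."""
    counter = 0
    local = {}          # per-thread register holding the value read
    for tid, step in schedule:
        if step == "read":
            local[tid] = counter
        else:           # "write"
            counter = local[tid] + 1
    return counter

def interleavings():
    steps = [(0, "read"), (0, "write"), (1, "read"), (1, "write")]
    for order in permutations(steps):
        # Keep only schedules where each thread reads before it writes.
        if order.index((0, "read")) < order.index((0, "write")) and \
           order.index((1, "read")) < order.index((1, "write")):
            yield order

outcomes = {run(s) for s in interleavings()}
print(outcomes)  # both 2 (correct) and 1 (a lost update) are reachable
```

Proving that a real concurrent library excludes such outcomes, for every interleaving and every client, is exactly the kind of correctness argument the talk addresses.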

Speaker's bio:

I am a research associate at the University of Cambridge Computer Laboratory, where I did my undergraduate and postgraduate studies. Previously, I was a postdoc researcher at Microsoft Research Cambridge.



Achieving Reliability in Deployed Software Systems
Michael Bond | University of Texas, Austin

2010-03-15, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Software is becoming more complex and concurrent due to demand for features and hardware trends that are delivering more cores rather than faster ones. In the face of these challenges, developers have trouble writing large, correct, scalable programs. Deployed software inevitably contains bugs, even if it has been thoroughly tested, because it is infeasible to analyze and test all possible inputs, environments, and thread schedules. My research focuses on improving reliability while production software runs, to help prevent, diagnose, and tolerate errors that actually manifest in deployment.

This talk first presents Pacer, a deployable, scalable approach for detecting data races, which are a common and serious type of concurrency bug. Second, I describe techniques for efficiently reporting the calling context (stack trace) of concurrency and other bugs -- essential information for understanding the behavior of complex, modern programs. I conclude with my future plans for developing new analyses and frameworks that make concurrent software reliable.

Speaker's bio:

Michael D. Bond is a postdoctoral fellow in Computer Science at UT Austin. He received his PhD from UT Austin in December 2008, supervised by Kathryn S. McKinley. His research makes software more robust by using dynamic analysis to diagnose and tolerate unexpected errors. Michael's interests include programming languages, runtime systems, compilers, and security. His dissertation received the 2008 ACM SIGPLAN Outstanding Doctoral Dissertation Award.



Improving the Interface between Systems and Cryptography
Thomas Ristenpart | UC San Diego

2010-03-11, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Modern cryptography provides a rigorous mathematical framework for proving the security of cryptographic algorithms. To be effective, however, these mathematical models must accurately reflect the realities of cryptography's use in systems. In this talk, I will address mismatches between theory and use, giving several examples from my work: the problem of credit card number encryption, dealing with bad (cryptographic) randomness, the increasingly diverse applications of cryptographic hash functions, and privacy-preserving device tracking. In each example, problems arise because of gaps between what cryptography offers and what security and privacy demand. To fix these issues, I take an application-oriented approach. This involves modifying cryptography in a theoretically sound way so that it works better for systems, as well as understanding cryptography's role in broader system security mechanisms.

Looking forward, I will discuss my recent work on new attacks in the setting of cloud computing and my future plans for securing next-generation cloud computing services.

Speaker's bio:

-



Byzantine fault tolerance for cluster services
Allen Clement | University of Texas

2010-03-10, 16:00 - 17:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Experiences with computers and computer systems indicate an inconvenient truth: computers fail, and they fail for a wide range of reasons including power outage, disc and memory corruption, misconfiguration, NIC malfunction, user error, and many others. The impact of computer failures can be dramatic, ranging from unavailability of email to canceled flights and stranded airline passengers to closed companies.

Byzantine fault tolerant (BFT) state machine replication is posited as an approach for masking individual computer faults and deploying robust services that continue to function despite failures. However, the lack of deployed systems that utilize BFT techniques indicates that the current state of the art falls short of the needs of practical deployments. In this talk I will discuss our efforts to make BFT a practical and attractive option for practitioners. These efforts are centered around developing BFT techniques that (a) are fast and efficient, (b) tolerate Byzantine faults, and (c) can be easily incorporated into legacy applications.

This work is in collaboration with Amit Aiyer, Lorenzo Alvisi, Mike Dahlin, Manos Kapritsos, Yang Wang, and Edmund Wong (UT-Austin), Rama Kotla (currently MSR-SVC), and Mirco Marchetti (currently University of Modena and Reggio Emilia).

Speaker's bio:

Allen Clement is a PhD candidate at the University of Texas at Austin. His research interests include distributed systems, fault tolerance, computer networks, and operating systems. He received an A.B. degree in Computer Science from Princeton University in 2000, and expects to complete the Ph.D. degree in Computer Science from the University of Texas at Austin shortly.



Compositional Shape Analysis by means of Bi-Abduction
Cristiano Calcagno | Imperial College, London

2010-03-08, 11:00 - 12:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We describe a compositional shape analysis, where each procedure is analyzed independently of its callers. The analysis uses an abstract domain based on a restricted fragment of separation logic, and assigns a collection of Hoare triples to each procedure; the triples provide an over-approximation of data structure usage. Compositionality brings its usual benefits - increased potential to scale, ability to deal with unknown calling contexts, graceful way to deal with imprecision - to shape analysis, for the first time. The analysis rests on a generalized form of abduction (inference of explanatory hypotheses) which we call bi-abduction. Bi-abduction displays abduction as a kind of inverse to the frame problem: it jointly infers anti-frames (missing portions of state) and frames (portions of state not touched by an operation), and is the basis of a new interprocedural analysis algorithm. We have implemented our analysis algorithm and we report case studies on smaller programs to evaluate the quality of discovered specifications, and larger programs (e.g., an entire Linux distribution) to test scalability and graceful imprecision.
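The frame/anti-frame bookkeeping at the heart of bi-abduction can be sketched in a deliberately simplified model. Below, symbolic states are just sets of independent heap facts; real separation-logic bi-abduction manipulates spatial formulas with separating conjunction, which this toy does not capture:

```python
# A deliberately simplified model of bi-abduction (illustrative only).
# States and preconditions are sets of independent heap facts; the goal
# is to solve, for anti_frame and frame:
#     state * anti_frame  |-  pre * frame
def biabduce(state, pre):
    anti_frame = pre - state     # missing facts the caller must supply
    frame = state - pre          # facts the callee does not touch
    # Sanity check: in this set model the entailment above holds.
    assert state | anti_frame == pre | frame
    return anti_frame, frame

# Caller currently owns x and y; the callee's precondition needs x and z.
anti, frame = biabduce({"x|->1", "y|->2"}, {"x|->1", "z|->3"})
print(anti)   # the inferred anti-frame: what the caller is missing
print(frame)  # the inferred frame: untouched state, restored afterwards
```

The anti-frame becomes part of the procedure's inferred precondition, and the frame is carried around the call unchanged, which is what makes the analysis compositional.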

Speaker's bio:

-



Algorithms for Parallel Cache Hierarchies
Guy Blelloch | Carnegie Mellon University

2010-03-05, 15:00 - 16:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Saarbrücken building E1 4, room 24

Abstract:

Cache hierarchies in multicore computers are quite complex, consisting of many levels of both shared and private caches. Designing algorithms and applications for such caches can be very complicated, although it is often necessary to get good performance. We discuss approaches to capturing the locality of a parallel algorithm at a high level, requiring little work by the algorithm designer or programmer, and then using this along with an appropriate thread scheduler to get good cache performance on a variety of parallel cache configurations. I will describe results for private caches, shared caches, and some new results on multiple-level cache hierarchies. In all cases the same algorithms can be used, but the scheduler needs to be changed. The approach makes use of ideas on cache-oblivious algorithms.

Speaker's bio:

Guy Blelloch is a Professor of Computer Science at Carnegie Mellon. His research interests are in programming languages and algorithms and how they interact, with an emphasis on parallel computation. Blelloch designed and implemented the parallel programming language NESL, a language designed for easily expressing and analyzing parallel algorithms, and has worked on issues in scheduling, algorithm design, cache efficiency, garbage collection, and synchronization primitives.



Verifying Functional Programs with Type Refinements
Joshua Dunfield | McGill University

2010-03-03, 14:00 - 15:00
Saarbrücken building MPI SWS, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Types express properties of programs; typechecking is specification checking. But the specifications expressed by conventional type systems are imprecise. Type refinements enable programmers to express more precise properties, while keeping typechecking decidable.

I present a system that unifies and extends past work on datasort and index refinements. Intersection and union types permit a powerful type system that requires no user input besides type annotations. Instead of seeing type annotations as a burden or just as a shield against undecidability, I see them as a desirable form of machine-checked documentation. Accordingly, I embrace the technique of bidirectional typechecking for everything from dimension types to first-class polymorphism.

My implementation of this type system, for a subset of Standard ML, found several bugs in the SML/NJ data structure libraries.

Speaker's bio:

Joshua Dunfield received his PhD from Carnegie Mellon University in 2007 for his work on type refinements, intersection and union types. He is presently a postdoctoral fellow at McGill University in Montreal. His research interests include type-based verification, typed compilation, dependent types, functional programming, proof assistants and programming environments.



Partial Replication in Large Networks
Nicolas Schiper | University of Lugano

2010-02-16, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The popularity of Internet applications such as e-banking, on-line stores, and social networks has tremendously increased in the past years. As a consequence, our daily lives depend on computers more and more each day. These applications typically run on many machines spread across several geographical sites, or groups, to provide low response times to clients. In these complex systems, hardware and software failures are common. Providing high availability without sacrificing performance is thus of prime importance. In this talk, we present protocols to achieve high availability through data replication. We proceed in two steps. We first devise fault-tolerant atomic multicast algorithms that offer message ordering guarantees, and allow messages to be addressed to any subset of groups. We then build a partial replication protocol that relies on this multicast service. The four atomic multicast protocols presented differ in which properties they ensure, namely disaster tolerance and genuineness. With the former property, entire groups may contain faulty computers; with the latter, to deliver a message m, protocols only involve the groups addressed by m. Performance at the multicast layer is obtained by minimizing the latency to deliver multicast messages. At the replication layer, data is partitioned among groups. This allows for more scalability than found in traditional replicated systems since sites only handle a fraction of the workload. We show how convoy effects can appear in partially replicated systems and propose techniques to reduce these effects. Experimental evaluations compare the proposed solutions.

Speaker's bio:

Nicolas Schiper holds an MSc in computer science from EPFL and recently defended his PhD thesis, which was supervised by Prof. Fernando Pedone at the University of Lugano in Switzerland. He is currently a post-doc at the same institution.

His main research interests are distributed systems and fault-tolerance.



Fences in Weak Memory Models
Jade Alglave | Inria

2010-02-08, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We present an axiomatic framework, implemented in Coq, to define weak memory models w.r.t. several parameters: local reorderings of reads and writes, and visibility of inter- and intra-processor communications through memory, including full store atomicity relaxation. Thereby, we give a formal hierarchy of weak memory models, in which we provide a formal study of what should be the action and placement of fences to restore a given model, such as Sequential Consistency, from a weaker one. Finally, we provide a tool, diy, that tests a given machine to determine the architecture it exhibits. We detail the results of our experiments on Power and the model we extract from them. This identified an implementation error in Power 5 memory barriers (for which IBM is providing a software workaround); our results also suggest that Power 6 does not suffer from this problem.
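The kind of relaxation at stake can be seen in the classic store-buffering litmus test. The sketch below (an illustration, not the authors' Coq development) enumerates executions of thread 0 running `x=1; r1=y` and thread 1 running `y=1; r2=x`, optionally allowing each thread's store to be delayed past its subsequent load, which is the reordering a store buffer introduces and a full fence forbids:

```python
# Store-buffering litmus test, explored by brute force.
from itertools import permutations

EVENTS = [(0, "store"), (0, "load"), (1, "store"), (1, "load")]

def outcomes(program_order_holds):
    results = set()
    for order in permutations(EVENTS):
        if program_order_holds and any(
            order.index((t, "store")) > order.index((t, "load"))
            for t in (0, 1)):
            continue  # under SC (or with a fence) the store comes first
        mem = {"x": 0, "y": 0}
        regs = {}
        for tid, ev in order:
            own, other = ("x", "y") if tid == 0 else ("y", "x")
            if ev == "store":
                mem[own] = 1
            else:
                regs[tid] = mem[other]
        results.add((regs[0], regs[1]))
    return results

print(outcomes(True))             # under SC, (0, 0) never appears
print((0, 0) in outcomes(False))  # relaxing store->load order allows it
```

Placing a fence between each thread's store and load is precisely what restores the sequentially consistent set of outcomes.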

Speaker's bio:

-



Distributed Key Generation and its Applications
Aniket Kate | University of Waterloo

2010-02-03, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor

Abstract:

Although distributed key generation (DKG) has been studied for some time, it has never been examined outside of the synchronous communication setting. In this talk, I will present the first practical and provably secure asynchronous DKG protocol and its implementation for use over the Internet. I will also discuss cryptographic properties such as uniform randomness of the shared secret, and will provide proactive security and group modification primitives. Notably, this asynchronous DKG protocol requires a set agreement protocol, and implements it using a leader-based Byzantine agreement scheme.

In the second half of the talk, I will describe applications of the DKG protocol in designing distributed private-key generators (PKGs) for identity-based cryptography (IBC), a pairing-based onion routing (PB-OR) circuit construction and two robust communication protocols in distributed hash tables. Looking in detail at PB-OR, I will describe a provably secure privacy-preserving key agreement scheme in the IBC setting with distributed PKG and use it to design an efficient and compact onion routing circuit construction that is secure in the universal composability framework.
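The core idea of DKG, generating a shared secret that no single party ever holds, can be sketched as follows. This is a minimal synchronous toy (the talk's protocol additionally provides verifiable sharing, asynchrony, and Byzantine agreement on the qualified set, none of which is modelled here): each party Shamir-shares a random contribution, and the joint secret is the sum of all contributions.

```python
# Minimal synchronous DKG sketch (illustrative only; toy field size).
import random

P = 2**31 - 1          # a prime field
N, T = 5, 2            # 5 parties, degree-2 polynomials (threshold 3)

def share(secret):
    """Shamir-share `secret`: evaluate a random polynomial at 1..N."""
    coeffs = [secret] + [random.randrange(P) for _ in range(T)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, N + 1)]

def lagrange_at_zero(points):
    """Interpolate the polynomial through `points` and evaluate at 0."""
    secret = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

contributions = [random.randrange(P) for _ in range(N)]
subshares = [share(c) for c in contributions]       # party k deals row k
shares = [sum(col) % P for col in zip(*subshares)]  # party i sums column i
joint = lagrange_at_zero(list(enumerate(shares, 1))[:T + 1])
assert joint == sum(contributions) % P
print("joint secret reconstructed from", T + 1, "shares")
```

Any T+1 parties can reconstruct (or use) the joint secret, yet the dealing phase never places it in one party's hands, which is what makes DKG suitable for distributed private-key generators.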

Speaker's bio:

-



A Sensor-Based Framework for Kinetic Data
Sorelle Friedler | University of Maryland

2010-02-02, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We introduce a framework for storing and processing kinetic data observed by sensor networks, such as highway traffic or migratory birds. These sensor networks generate vast quantities of data, which motivates a significant need for data compression. We are given a set of sensors, each of which continuously monitors some region of space. We are interested in the kinetic data generated by a finite set of objects moving through space, as observed by these sensors. Our model relies purely on sensor observations; it allows points to move freely and requires no advance notification of motion plans. Sensor outputs are represented as random processes, where nearby sensors may be statistically dependent. We model the local nature of sensor networks by assuming that two sensor outputs are statistically dependent only if the two sensors are among the k nearest neighbors of each other. We present an algorithm for the lossless compression of the data produced by the network. We show that, under the statistical dependence and locality assumptions of our framework, asymptotically this compression algorithm encodes the data to within a constant factor of the information-theoretic lower bound dictated by the joint entropy of the system.

We also present an efficient algorithm for answering spatio-temporal range queries. Our algorithm operates on a compressed representation of the data, without the need to decompress it. We analyze the efficiency of our algorithm in terms of two natural measures of information content, the statistical and empirical joint entropies of the sensor outputs. We show that with space roughly equal to entropy, queries can be answered in time that is roughly logarithmic in entropy. These results represent the first solution to range searching problems over compressed kinetic sensor data and set the stage for future statistical analysis.

This is joint work with David Mount.

Speaker's bio:

Sorelle Friedler is a Ph.D. candidate in the Department of Computer Science at the University of Maryland. She received her M.S. from the University of Maryland in 2007 and her B.A. from Swarthmore College in 2004. Her main research interest is in algorithms for geometric problems, and she is currently interested in creating algorithms for calculating statistical properties of moving points. Other research interests include linear programming, programming languages, and computational analysis of educational data. She is the recipient of numerous awards, including the AT&T Labs Fellowship, awarded yearly to 5 graduate students chosen from a US national pool.



Fault-tolerant partial replication at large-scale
Marc Shapiro and Pierre Sutra | LIP6

2010-01-11, 15:00 - 16:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Experience with many applications (e.g., web caching, P2P sharing, and grid and cloud computing) shows that data replication is a fundamental feature of large systems. Replication improves performance, availability and dependability.

Replication algorithms based on state machine replication are attractive because they maintain a simple sequential semantics. However, we believe they will not scale to massive cloud and peer-to-peer systems. To be successful, future algorithms must: (1) support multi-object transactions across distant data centres, (2) leverage the semantics of data accesses, and (3) support partial replication.

In the first part of this talk, we describe two variants of Generalized Paxos, a solution to consensus that leverages commutativity semantics. Our algorithms reduce message delay when a collision occurs between non-commuting operations. In the second part, we present a new approach to partial replication of database systems at large scale. Previous protocols either reexecute transactions entirely and/or compute a total order of transactions. In contrast, ours applies update values, and generates a partial order between mutually conflicting transactions only.

Speaker's bio:

-



Reinventing The Desktop
Brad Chen | Google Inc.

2009-12-15, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Saarbrücken building E1 4, room 007

Abstract:

Desktop software, in the form of web browsers, browser features, and OS distributions, is a growing area of engineering activity at Google. This talk will give an overview of this work, looking in detail at Native Client as an example project in the space. Native Client is an open-source technology for running untrusted native code in web applications, with the goal of maintaining the browser neutrality, OS portability, and safety that people expect from web apps. It supports performance-oriented features generally absent from web application programming environments, such as thread support, instruction set extensions such as SSE, and use of compiler intrinsics and hand-coded assembler. We combine these properties in an open architecture designed to leverage existing web standards, and to encourage community review and 3rd-party tools. Overall, Google's desktop efforts seek to enable new Web applications, improve end-user experience, and enable a more flexible balance between client and server computing. Google has open sourced many of our desktop efforts, in part to encourage collaboration and independent innovation.

Speaker's bio:

J. Bradley Chen manages the Native Client project at Google, where he has also worked on cluster performance analysis projects. Prior to joining Google, he was Director of the Performance Tools Lab in Intel's Software Products Division. Chen served on the faculty of Harvard University from 1994-1998, conducting research in operating systems, computer architecture and distributed systems, and teaching a variety of related graduate and undergraduate courses. He has published widely on the subjects of systems performance and computer architecture. Dr. Chen has bachelors and masters degrees from Stanford University and a Ph.D. from Carnegie Mellon University.



Characterization of an Online Social Aggregation Service
Anirban Mahanti | NICTA

2009-12-07, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Many Web users have accounts with multiple different social networking services. This has prompted development of services that aggregate information available through various services. This talk will consider one such aggregation service, FriendFeed. The first part of the talk will consider questions such as what types of services users aggregate content from, the relative popularity of services, who follows the aggregated feeds, and why. The second part of the talk will focus on factors such as proximity, common interests, time spent in the system, and combinations thereof, and their influence in link formation. Results based on data collected from FriendFeed between September 2008 and May 2009 will be presented. This talk is based on joint work with Martin Arlitt (HP Labs/University of Calgary), Niklas Carlsson (University of Calgary), Sanchit Garg (IIT Delhi), and Trinabh Gupta (IIT Delhi).

Speaker's bio:

Anirban Mahanti is a Senior Researcher at NICTA, Australia. He holds a B.E. in Computer Science and Engineering from the Birla Institute of Technology (at Mesra), India, and a M.Sc. and a Ph.D. in Computer Science from the University of Saskatchewan, Canada. His research interests include network measurement, TCP/IP protocols, performance evaluation, and distributed systems.



Improving the Privacy of Online Social Networks and Cloud Computing
Stefan Saroiu | Microsoft Research

2009-12-04, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor

Abstract:

This talk has two parts. In the first part, I will present Lockr, a system that improves the privacy of online social networks (OSNs). Lockr offers three significant privacy benefits to OSN users. First, it separates social networking content from all other functionality that OSNs provide. This decoupling puts users in control of their own social information: they decide which OSN providers should store it, which third parties should have access to it, or they can even choose to manage it themselves. Second, Lockr ensures that digitally signed social relationships needed to access social data cannot be re-used by an OSN site for unintended purposes. This feature drastically reduces the value of the social content that users entrust to their OSN providers. Finally, Lockr enables message encryption using a social relationship key. This key lets two strangers with a common friend verify their relationship without exposing it to others, a common privacy threat when sharing data in a decentralized scenario.

In the second part, I will present our ongoing work in improving privacy when users run their code in the infrastructure. This is becoming an increasingly common scenario as cloud computing and mobile computing are becoming more popular. I will start by discussing these scenarios, after which I will present our current work in minimizing the attack surface exposed to malicious users and operators in VM-based cloud environments.

Lockr is joint work with Alec Wolman (MSR), Amin Tootoonchian, and Yashar Ganjali (U. of Toronto). For more information or to download our Lockr implementations for Flickr and for BitTorrent, please visit http://www.lockr.org. The second part is work-in-progress that is jointly done with Alec Wolman (MSR) and Shravan Rayanchu (U. of Wisconsin).

Speaker's bio:

Stefan Saroiu is a researcher in the Networking Research Group at Microsoft Research in Redmond. Stefan's research interests span large-scale distributed systems, mobile systems, and computer security. Before coming to MSR in 2008, Stefan spent three years writing papers, teaching, and advising students, which is pretty much what the job of an Assistant Professor at the University of Toronto is. Before that, Stefan spent four months at Amazon.com measuring their workload and participating in the early stages of the design of their new shopping cart service (aka Dynamo). Stefan finished his Ph.D. in 2004 at the University of Washington where he was advised by Steve Gribble and Hank Levy.



Sierra: a power-proportional, distributed storage system
Eno Thereska | Microsoft Research

2009-12-02, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

I'll present the design, implementation, and evaluation of Sierra: a power-proportional, distributed storage system. I/O workloads in data centers show significant diurnal variation, with peak and trough periods. Sierra powers down storage servers during the troughs. The challenge is to ensure that data is available for reads and writes at all times, including power-down periods. Consistency and fault-tolerance of the data, as well as good performance, must also be maintained. Sierra achieves all these through a set of techniques including power-aware layout, predictive gear scheduling, and a replicated short-term versioned store. Replaying live server traces from a large e-mail service (Hotmail) shows power savings of at least 23%, and analysis of load from a small enterprise shows that power savings of up to 60% are possible.

Speaker's bio:

I have broad interests in systems. Currently I am focusing on file systems, storage technologies, and high-performance data centers. I also have great interest in applying machine learning and queuing analysis to help simplify and automate system management. Since September 2007 I have been a Researcher at Microsoft Research in Cambridge, UK. I received my PhD/MS/BS from Carnegie Mellon University.



BorgCube: Rethinking the data center
Ant Rowstron | Microsoft Research

2009-11-30, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The BorgCube project explores how to combine ideas from high performance computing, networking and distributed systems to create future data centers. A BorgCube has a tight integration between servers, networking and services, achieved by not using any switches or routers. Instead, servers are directly connected to a small set of other servers. The BorgCube uses a k-ary 3-cube (or 3D Torus) physical topology, providing good failure resilience with reduced cost.

More importantly, we believe this also creates a platform on which it is easier to build the large-scale core distributed services, e.g. GFS, BigTable and Dynamo, which underpin many of the applications that run in large-scale data centers. Each server has a coordinate creating a namespace, similar to the namespace provided by a structured overlay. However, unusually for an overlay, the physical network topology and virtual topology are the same. This namespace can be exploited by the services, and all functionality is implemented on top of a link-orientated low-level API. We have implemented many services on our prototype BorgCube, including a bootstrap service, several multi-hop routing protocols, and a service supporting unmodified TCP/IP applications allowing them to run on the BorgCube.

In this talk I will describe the ideas behind BorgCube, and explain some of our early experiences with writing services for the BorgCube. This is joint work with Paolo Costa and Greg O'Shea.

Speaker's bio:

-



Differential attacks on PIN Processing APIs
Graham Steel | ENS-Cachan

2009-10-08, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor

Abstract:

International standards dictate that all processing of customer Personal Identification Numbers (PINs) in the international cash machine network must take place inside special tamper resistant Hardware Security Modules (HSMs). These HSMs have a restricted API designed so that even if an attacker is able to make arbitrary command calls to the device, no customer PINs can be obtained. However, in recent years, a number of attacks have been found on these APIs. Some of them are so-called differential attacks, whereby an attacker makes repeated calls to the API with slightly different parameters, and from the pattern of error messages received, he is able to deduce the value of a PIN. In this talk, I will present some of these attacks, and talk about efforts to analyse them formally. This will include techniques for proving the absence of such attacks in patched APIs, and a practical proposal for improving the security of the network without making large-scale changes to the current infrastructure.
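The flavour of such a differential attack can be sketched with a toy model inspired by the well-known decimalisation-table attack (real HSM APIs and PIN-block formats differ). The "HSM" derives a natural PIN from the account number via a hex-to-decimal table and compares it with the customer's PIN; the attacker never sees the PIN, but may vary the table and watch which calls start failing:

```python
# Toy model of a differential attack on a PIN-verification API.
DECTAB = "0123456789012345"   # maps hex digit 0..F to a decimal digit

def derive_pin(account_hex, dectab):
    return "".join(dectab[int(c, 16)] for c in account_hex[:4])

def make_verify_oracle(account_hex):
    true_pin = derive_pin(account_hex, DECTAB)   # held inside the "HSM"
    return lambda dectab: derive_pin(account_hex, dectab) == true_pin

def digits_in_pin(verify):
    found = set()
    for d in "0123456789":
        # Remap every table entry that decimalises to d; if verification
        # now fails, digit d must occur in the PIN.
        probe = "".join("?" if c == d else c for c in DECTAB)
        if not verify(probe):
            found.add(d)
    return found

verify = make_verify_oracle("41D7A9F2C0B35E68")
print(sorted(digits_in_pin(verify)))  # the attacker learns the PIN digits
```

Ten API calls thus leak which digits the PIN contains; follow-up probes narrow down their positions. Formal analysis aims to prove that a patched API admits no sequence of calls with this kind of information leak.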

Speaker's bio:

Graham Steel holds a degree in Mathematics from the University of Cambridge and a PhD in Informatics from the University of Edinburgh. After post-doctoral research positions in Germany, Italy, Scotland and France, he became an INRIA researcher at the Specification and Verification Laboratory (LSV) of the ENS-Cachan in 2008. He specialises in the analysis of security of APIs for cryptographic modules used in critical devices such as cash machines, USB security tokens and smartcards.



Bruce Maggs | Duke University

2009-09-18, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor

Abstract:

This talk shows how operators of Internet-scale distributed systems, such as Google, Microsoft, and Akamai, can reduce electricity costs (but not necessarily energy consumption) by dynamically allocating work among data centers in response to fluctuating energy prices. The approach applies to systems consisting of fully replicated clusters of servers installed in diverse geographical locations where energy can be purchased through spot markets. Using historical energy prices for major energy markets in the United States, as well as usage data from Akamai's content delivery network, we show how much can be saved now, and what might be saved in the future given server technology trends.
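The core scheduling idea can be sketched in a few lines (an illustrative greedy model, not the paper's actual formulation; cluster names, prices, and capacities are made up):

```python
def allocate(load, clusters):
    """Greedily assign total request load to fully replicated clusters
    in order of current spot electricity price, up to each cluster's
    capacity. Returns ({cluster: load}, cost), assuming for illustration
    that energy use is proportional to load served."""
    plan, cost = {}, 0.0
    for name, price, capacity in sorted(clusters, key=lambda c: c[1]):
        served = min(load, capacity)
        if served > 0:
            plan[name] = served
            cost += served * price
            load -= served
    if load > 0:
        raise ValueError("demand exceeds total capacity")
    return plan, cost

# Hypothetical spot prices (per unit of load) and capacities:
clusters = [("virginia", 42.0, 60), ("texas", 35.0, 50), ("california", 55.0, 80)]
plan, cost = allocate(100, clusters)  # texas (cheapest) fills first
```

As prices fluctuate hour to hour, rerunning the allocation shifts load toward whichever markets are currently cheap.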

Joint work with Asfandyar Qureshi, Rick Weber, Hari Balakrishnan, and John Guttag.

Speaker's bio:

Bruce Maggs received the S.B., S.M., and Ph.D. degrees in computer science from the Massachusetts Institute of Technology in 1985, 1986, and 1989, respectively. His advisor was Charles Leiserson. After spending one year as a Postdoctoral Associate at MIT, he worked as a Research Scientist at NEC Research Institute in Princeton from 1990 to 1993. In 1994, he moved to Carnegie Mellon, where he stayed until joining Duke University in 2009 as a Professor in the Department of Computer Science. While on a two-year leave-of-absence from Carnegie Mellon, Maggs helped to launch Akamai Technologies, serving as its Vice President for Research and Development, before returning to Carnegie Mellon. He retains a part-time role at Akamai as Vice President for Research.

Maggs's research focuses on networks for parallel and distributed computing systems. In 1986, he became the first winner (with Charles Leiserson) of the Daniel L. Slotnick Award for Most Original Paper at the International Conference on Parallel Processing, and in 1994 he received an NSF National Young Investigator Award. He was co-chair of the 1993-1994 DIMACS Special Year on Massively Parallel Computation and has served on the steering committees for the ACM Symposium on Parallel Algorithms and Architectures (SPAA) and ACM Internet Measurement Conference (IMC), and on the program committees of numerous ACM conferences including STOC, SODA, PODC, and SIGCOMM.



A complete characterization of observational equivalence in polymorphic lambda-calculus with general references
Eijiro Sumii | Tohoku University, Sendai, Japan

2009-09-10, 14:00 - 15:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Saarbrücken building E1 4, room 019

Abstract:

We give the first sound and complete proof method for observational equivalence in the full polymorphic lambda-calculus with existential types and first-class, higher-order references. Our method is syntactic and elementary in the sense that it only employs simple structures such as relations on terms. It is nevertheless powerful enough to prove many interesting equivalences, including ones that cannot be proved by previous approaches such as logical relations.

Speaker's bio:

-



Looking over the evolution of Internet workloads
Virgilio Almeida | Federal University of Minas Gerais, Brazil

2009-09-04, 11:00 - 12:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The Internet has a number of popular applications and services experiencing workloads with very different, non-trivial and unique properties. The emergence of new applications and computing models (e.g., online social networking, streaming video, games and cloud computing), and the explosive growth in popularity of others (e.g., search, peer-to-peer, e-business, malware), most of which exhibit workloads whose fundamental properties are not yet fully understood, make this a research topic of timely relevance. Real workload characterization and modeling provide key insights into the cost-effective design, operation, and management of Internet-based services and systems. This talk looks over the evolution of Internet workloads, presenting an overview of a variety of real Internet workloads and how they have evolved through the years, from system workload to social workload. It shows the main properties of these workloads and discusses the invariants across different types of workloads. It outlines methodologies and techniques used in workload characterization and modeling. Constructing a model involves tradeoffs between usefulness and accuracy. The talk shows how characterization techniques have been used to capture the most relevant aspects of Internet workloads while keeping the model as simple as possible. The talk concludes by showing some examples of how workload models have been used to design efficient Web systems and services.
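As a small example of such characterization techniques, the sketch below estimates the exponent of a Zipf-like popularity distribution (frequency proportional to rank^-alpha), a classic invariant across many Web workloads, via a least-squares fit on the log-log rank/frequency plot (illustrative synthetic trace, not data from the talk):

```python
import math
from collections import Counter

def zipf_exponent(requests):
    """Fit frequency ~ rank^-alpha to a request trace and return the
    estimated alpha (negated slope of the log-log regression line)."""
    freqs = sorted(Counter(requests).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic trace: object i is requested ~1000/i times, i.e. alpha ~= 1.
trace = [i for i in range(1, 50) for _ in range(1000 // i)]
```

A fitted alpha near 1 reproduces the heavy-tailed popularity commonly reported for Web object references.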

Speaker's bio:

Virgílio Almeida is a professor of computer science at the Federal University of Minas Gerais, Brazil. He has held visiting-professor positions at Boston University and Polytechnic University of Catalunya, Barcelona, as well as visiting appointments at Xerox PARC and Hewlett-Packard Research Laboratory and Polytechnic Institute of NYU. His research interests include models to analyze the behavior of large-scale distributed systems. Almeida is a recipient of a Fulbright Research Scholar Award and is a full member of the Brazilian Academy of Sciences. He is also an International Fellow of the Santa Fe Institute for 2008/2009 and a member of the editorial board of Internet Computing and First Monday. Almeida is the author of more than 100 technical papers and co-author (with Danny Menasce) of four books, including Performance By Design and Capacity Planning for Web Services: Metrics, Models, and Methods, published by Prentice Hall and translated into three languages.





Ratul Mahajan | Microsoft Research, Redmond

2009-07-13, 14:00 - 15:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Through extensive measurements of wireless connectivity from moving vehicles, I find that packet loss creates significant performance issues for interactive applications. This poor performance exists for both WLAN technologies (e.g., WiFi) and WWAN technologies (e.g., 3G and WiMax). Unlike wired networks, in wireless networks priority-based queuing is not sufficient to reduce packet loss for loss-sensitive applications. I propose that losses should instead be masked through aggressive but controlled use of available redundancy, and I describe two such systems. The first system, called ViFi, targets the use of WiFi from moving vehicles. Current WiFi handoff methods, in which clients communicate with one base station at a time, lead to frequent disruptions in connectivity. ViFi leverages the presence of redundant in-range base stations to reduce disruptions and improve application performance. The second system, called PluriBus, targets the use of 3G or WiMax from moving vehicles. PluriBus leverages the spare capacity in the wireless channel using a novel erasure coding method. In my experiments, each system improves the performance of interactive applications by at least a factor of 2.

Speaker's bio:

Ratul Mahajan is a Researcher at Microsoft Research. His research interests include all aspects of networked systems, especially their architecture and design. His work spans Internet routing and measurements, incentive-compatible protocol design, practical models for wireless networks, and vehicular networks. He has published several highly-cited papers in top-tier venues such as SIGCOMM, SOSP, and NSDI. He is a winner of the SIGCOMM best paper award, the William R. Bennett Prize, and Microsoft Research Graduate Fellowship. He obtained his Ph.D. from the University of Washington (2005) and B.Tech. from Indian Institute of Technology, Delhi (1999).



Rethinking Storage Space Management in High-Performance Computing Centers
Ali R. Butt | Virginia Tech

2009-06-08, 16:00 - 17:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Modern scientific applications, such as computer models for analyzing data from particle colliders or space observatories, process data that is growing exponentially in size. High-performance computing (HPC) centers that support such applications are now faced with a data deluge, which can no longer be managed using the ad hoc approaches in use today. Consequently, a fundamental reevaluation of data management tools and techniques is required. In this talk, I will describe a fresh approach to HPC storage space management, especially for the center scratch space --- high-speed storage used for servicing currently running and soon-to-run applications --- that effectively treats the storage as a tiered cache and provides comprehensive, integrated storage management. I will discuss how the caching model is achieved, and how its mechanisms are supported through just-in-time staging and timely offloading of data. Finally, I will show how this approach can also mitigate the effects of center storage failures. The overall goal is to improve HPC center serviceability and resource utilization.
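A toy sketch of the cache view of scratch space (names, sizes, and the LRU policy are invented for illustration; this is not the system described in the talk):

```python
from collections import OrderedDict

class ScratchCache:
    """Toy model of scratch space treated as a tiered cache: datasets
    are staged in just before a job runs and evicted LRU-style, with
    dirty results offloaded to archival storage instead of discarded."""
    def __init__(self, capacity_gb, archive):
        self.capacity = capacity_gb
        self.archive = archive          # backing store: name -> size_gb
        self.resident = OrderedDict()   # name -> (size_gb, dirty)

    def used(self):
        return sum(size for size, _ in self.resident.values())

    def stage_in(self, name):
        """Just-in-time staging for a soon-to-run job; evict as needed."""
        size = self.archive[name]
        while self.used() + size > self.capacity:
            victim, (vsize, dirty) = self.resident.popitem(last=False)
            if dirty:                   # timely offload before eviction
                self.archive[victim] = vsize
        self.resident[name] = (size, False)

    def write(self, name, size):
        """A job writes results to scratch; dirty until offloaded."""
        self.resident[name] = (size, True)
        self.resident.move_to_end(name)

archive = {"inputA": 40, "inputB": 50}
cache = ScratchCache(80, archive)
cache.stage_in("inputA")
cache.write("resultX", 30)
cache.stage_in("inputB")    # evicts the clean inputA to make room
```

Because results are offloaded on eviction, a scratch failure costs only whatever has not yet been staged out, which is the failure-mitigation angle mentioned above.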

Speaker's bio:

Ali R. Butt is an Assistant Professor of Computer Science at Virginia Tech, USA. Ali received the Ph.D. in Electrical and Computer Engineering from Purdue University in 2006. His research interests are in experimental computer systems, especially in file and storage systems. His current work focuses on I/O and storage issues of modern High Performance Computing systems and data-intensive computing. Ali is the recipient of NSF CAREER Award (2008), IBM Faculty Award (2008), and a Virginia Tech College of Engineering "Outstanding New Assistant Professor" Award (2009).



Online Social Networks and Applications: a Measurement Perspective
Ben Y. Zhao | UC Santa Barbara

2009-05-28, 13:00 - 14:00
Saarbrücken building E1 5, room 5th floor

Abstract:

With more than half a billion users worldwide, online social networks such as Facebook are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of Internet applications that integrate relationships from social networks to improve security and performance. But can these applications be effective in real life? And if so, how can we predict their effectiveness when they are deployed on real social networks?

In this talk, we will describe recent research that tries to answer these questions using measurement-based studies of online social networks and applications. Using measurements of a socially-enhanced web auction site, we show how social networks can actually reduce fraud in online transactions. We then discuss the evaluation of social network applications, and argue that existing methods using social graphs can produce misleading results. We use results from a large-scale study of the Facebook network to show that social graphs are insufficient models of user activity, and propose the use of "interaction graphs" as a more accurate model. We construct interaction graphs from our Facebook datasets, and use both types of graphs to validate two well-known social-based applications (Reliable Email and SybilGuard). Our results reveal new insights into both systems and confirm our hypothesis that choosing the right graph model significantly impacts predictions of application performance.
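The interaction-graph idea can be illustrated with a small sketch (hypothetical data format, not the actual Facebook measurement pipeline): keep only friendship edges backed by observed activity, since many declared links carry no interactions at all.

```python
from collections import Counter

def interaction_graph(friend_edges, interactions, min_events=1):
    """From a declared social graph, keep only edges supported by at
    least `min_events` observed interactions (comments, messages, ...).
    Edges are undirected, so each is normalized to a frozenset."""
    counts = Counter(frozenset(e) for e in interactions)
    return {e for e in map(frozenset, friend_edges)
            if counts[e] >= min_events}

friends = [("alice", "bob"), ("alice", "carol"), ("bob", "carol")]
events = [("alice", "bob"), ("bob", "alice"), ("bob", "carol")]
active = interaction_graph(friends, events, min_events=2)
```

Running a social application's admission or routing logic over `active` rather than `friends` models what the deployed system would actually see.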

Speaker's bio:

Ben Zhao is a faculty member at the Computer Science department, U.C. Santa Barbara. Before UCSB, he completed his M.S. and Ph.D. degrees in Computer Science at U.C. Berkeley, and his B.S. from Yale University. His research interests include networking, security and privacy and distributed systems.

He is a recipient of the National Science Foundation's CAREER award, MIT Tech Review's TR-35 Award (Young Innovators Under 35), and is one of ComputerWorld's Top 40 Technology Innovators.



Self-Stabilizing Autonomic Recoverers
Olga Brukman | Ben-Gurion University

2009-05-20, 15:00 - 16:30
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Saarbrücken building E1 4, room 019

Abstract:

In this talk I will cover research conducted towards my PhD dissertation. This dissertation introduces theoretical foundations for system architectures and algorithms for creating truly robust autonomic systems -- systems that are able to recover automatically from unexpected failures. Our approaches complement each other, starting with the case of given black-box systems, continuing with the process of developing new systems, and concluding with the case of automatic creation of recovery-oriented software.

In the first part we consider software packages to be black boxes. We propose modeling software package flaws (bugs) by assuming eventual Byzantine behavior of the package. A general, yet practical, framework and paradigm for the monitoring and recovery of systems called autonomic recoverer is proposed. The framework receives task specific requirements in the form of safety and liveness predicates and recovery actions. The autonomic recoverer uses a new scheme for liveness assurance via on-line monitoring that complements known schemes for on-line safety assurance.

In the second part we consider a software package to be a transparent box and introduce the recovery oriented programming: programs will include important safety and liveness properties and recovery actions as an integral part of the program. We design a pre-compiler that produces augmented code for monitoring the properties and executing the recovery actions upon a property violation. Assuming the restartability property of a given program, the resulting code is able to overcome safety and liveness violations. We provide a correctness proof scheme for proving that the code produced by the pre-compiler from the program code combined with the important properties and recovery actions fulfills its specifications when started in an arbitrary state.

Finally, in the third part we consider a highly dynamic environment, which typically implies that there are no realizable specifications for the environment, i.e., there does not exist a program that respects the specifications for every given environment. In such cases the predefined recovery action may not suffice, and a dramatic change in the program is required. We suggest searching for a program at run time by trying all possible programs on plant replicas in parallel, where the plant is the relevant part of the environment. We present control search algorithms for various plant state settings (reflection, and the ability to set the plant to a certain state).

Speaker's bio:

-



Traveling to Rome: a retrospective on the journey
John Wilkes | Google, Mountain View California

2009-05-15, 15:00 - 16:00
Saarbrücken building E1 5, room Wartburg, 5th floor

Abstract:

Starting in 1994/5, the Storage Systems Program at HP Labs embarked on a decade-long journey to automate the management of enterprise storage systems by means of a technique we initially called attribute-managed storage. The key idea was to provide declarative specifications of workloads and their needs, and of storage devices and their capabilities, and to automate the mapping of one to the other. One of many outcomes of the project was a specification language we called Rome - hence the title of this talk, which offers a retrospective on the overall approach and some of the lessons we learned along the way.

Speaker's bio:

John joined HP Labs in 1982 with a PhD from Cambridge University where his thesis work won the BCS Technology Award and the Computer Journal's [Maurice] Wilkes prize. He became an HP Fellow and an ACM Fellow in 2002, with wide-ranging interests in distributed systems and self-managing systems; he's most well-known for his work on storage management, for which he was given the HP Labs Birnbaum prize in 2003.

He is an Adjunct Professor at Carnegie Mellon University; has participated in about a dozen top-tier program committees; been program chair for SOSP'99 and EuroSys'09; and was an assistant editor for ACM TOCS. John has authored or co-authored about 40 refereed publications and submitted about 60 invention disclosures to HP, of which about half have so far been granted patents. (His publications can currently be found at http://www.e-wilkes.com/john/work-publications.html.)

He joined Google full-time in mid-November 2008 and is based in Mountain View, CA.

In his spare time he continues, stubbornly, trying to learn how to blow glass.



Balachander Krishnamurthy | AT&T Labs - Research

2009-05-15, 11:00 - 12:00
Saarbrücken building E1 5, room Wartburg, 5th floor

Abstract:

For the last few years we have been examining the leakage of privacy on the Internet from one specific angle: how information related to individual users is aggregated as they browse seemingly unrelated Web sites. Thousands of Web sites across numerous categories, countries, and languages were studied to generate a privacy "footprint". This talk reports on our longitudinal study consisting of multiple snapshots of our examination of such diffusion over four years. We examine the various technical ways by which third-party aggregators acquire data and the depth of user-related information acquired. We study techniques for protecting against privacy diffusion, as well as the limitations of such techniques. We introduce the concept of secondary privacy damage.
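A minimal sketch of how such a footprint can be computed from crawl logs (the input format and domain names are illustrative, not the study's actual methodology):

```python
from collections import defaultdict

def privacy_footprint(page_loads):
    """For each third-party domain observed while crawling first-party
    sites, count how many distinct sites embed it. Input: iterable of
    (first_party_site, requested_domain) pairs from HTTP logs.
    Returns (site_count, domain) pairs, most pervasive first."""
    seen = defaultdict(set)
    for site, domain in page_loads:
        if domain != site:                 # third-party request
            seen[domain].add(site)
    return sorted(((len(s), d) for d, s in seen.items()), reverse=True)

loads = [("news.example", "tracker.example"),
         ("news.example", "news.example"),
         ("shop.example", "tracker.example"),
         ("blog.example", "tracker.example"),
         ("blog.example", "ads.example")]
top = privacy_footprint(loads)
```

Aggregators near the top of such a ranking are the "handful of companies" able to follow users across most popular sites.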

Our results show increasing aggregation of user-related data by a steadily decreasing number of entities. A handful of companies are able to track users' movement across almost all of the popular Web sites. Virtually all the protection techniques have significant limitations highlighting the seriousness of the problem and the need for alternate solutions.

I will also talk about a recent discovery of large-scale leakage of personally identifiable information (PII) via Online Social Networks (OSN). Third-parties can link PII with user actions both within OSN sites and elsewhere on non-OSN sites.

Speaker's bio:

Balachander Krishnamurthy has been with AT&T Labs--Research since his PhD. His main focus of research of late is in the areas of Internet privacy, Online Social Networks, and Internet measurements. He has authored and edited ten books, published more than 75 technical papers, holds twenty patents, and has given invited talks in over thirty countries. He co-founded the successful Internet Measurement Conference and Steps to Reducing Unwanted Traffic on the Internet workshop. In 2008 he co-founded the ACM SIGCOMM Workshop on Online Social Networks. He has been on the thesis committee of several PhD students, collaborated with over seventy researchers worldwide, and given tutorials at several industrial sites and conferences.

His most recent book "Internet Measurements: Infrastructure, Traffic and Applications" (525pp, John Wiley & Sons, co-authored with Mark Crovella), was published in July 2006 and is the first book focusing on Internet Measurement.



Formal Verification of Realistic Compilers
Zaynah Dargaye | ENSIIE, Evry

2009-05-13, 15:00 - 16:30
Saarbrücken building E1 5, room Wartburg, 5th floor / simultaneous videocast to Saarbrücken building E1 4, room 019

Abstract:

It is generally expected that the compiler is semantically transparent, that is, produces executable code that behaves as prescribed by the semantics of the source code. However, compilers -- and especially optimizing compilers, which attempt to increase the efficiency of generated code through program analyses and transformations -- are highly complex software systems that perform delicate symbolic computations. Bugs in the compiler are therefore unavoidable: they can cause wrong code to be produced from correct source code.

The CompCert project investigates the formal verification of realistic compilers usable for critical embedded software.  Such verified compilers come with a mathematical, machine-checked proof that the generated executable code behaves exactly as prescribed by the semantics of the source program.  By ruling out the possibility of compiler-introduced bugs, verified compilers strengthen the guarantees that can be obtained by applying formal methods to source programs.

This talk describes the formal verification of compilers and focuses on two experiments: the CompCert C compiler and the MLCompCert compiler. MLCompCert is the mechanically verified compiler for the purely functional fragment of ML that I developed and verified in Coq in my Ph.D. thesis.

Speaker's bio:

-



Fingerprinting performance crises in the datacenter
Moises Goldszmidt | Microsoft Research, Silicon Valley

2009-05-11, 16:00 - 17:00
Saarbrücken building E1 5, room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

We propose a method for significantly reducing troubleshooting and diagnosis time in the datacenter by automatically generating fingerprints of performance crises, enabling fast classification and recognition of recurring instances. We evaluated the approach on data from a production datacenter with hundreds of machines running a 24x7 enterprise-class user-facing application, verifying each identification result with the operators of the datacenter (and the corresponding troubleshooting tickets). The approach has 80% identification accuracy in the operations-online setting, with time to identification below 15 minutes (on average) after the start of the crisis (operators stipulated a deadline of 60 minutes). In an offline setting, where some parameters can be fitted optimally, the accuracy is in the 95%-98% range. After explaining the fingerprinting method and the results, I will end the talk with a discussion of the possibility of predicting the crises, and of extending this work to model the operator's repair actions for learning models of automated decision making. Joint work with Peter Bodik and Armando Fox from UC Berkeley, and Hans Andersen from Microsoft.
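To give a feel for the approach, the sketch below summarizes a crisis epoch as a vector of per-metric extreme-value fractions and recognizes recurrences by nearest labeled fingerprint (a drastic simplification of the method; thresholds, labels, and data are invented):

```python
def fingerprint(metric_samples, low, high):
    """Summarize one epoch as a vector: for each metric, the fraction
    of servers whose value falls below `low` or above `high`."""
    fp = []
    for values in metric_samples:          # one list of readings per metric
        n = len(values)
        fp.append(sum(v < low for v in values) / n)
        fp.append(sum(v > high for v in values) / n)
    return fp

def identify(crisis_fp, labeled):
    """Recognize a recurring crisis by the nearest labeled fingerprint
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda item: dist(item[1], crisis_fp))[0]

# Hypothetical past crises: [cpu_util, queue_len] readings per server.
labeled = [("overload", fingerprint([[95, 97, 99], [10, 12, 11]], 20, 80)),
           ("config-error", fingerprint([[5, 8, 3], [50, 55, 60]], 20, 80))]
new_crisis = fingerprint([[90, 96, 85], [15, 9, 14]], 20, 80)
```

Matching a fresh fingerprint against the labeled history is what enables identification within minutes rather than hours of manual triage.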

Speaker's bio:

Moises Goldszmidt is a principal researcher in Microsoft Research (Silicon Valley Campus). His research interests include probabilistic reasoning, graphical models, statistical machine learning, and systems. Prior to Microsoft, Moises held similar positions with Hewlett-Packard Labs, SRI International, and Rockwell Science Center, and was a principal scientist with Peakstone Corporation (start-up). Dr. Goldszmidt has a PhD degree in Computer Science from the University of California in Los Angeles (1992). Since 1999, Moises has been focusing his research on the application of statistical pattern recognition and probabilistic reasoning to the modeling, diagnosis, performance forecasting, and control of distributed networked systems.



JavaScript Isolation and Web Security
John C. Mitchell | Stanford University

2009-05-04, 16:00 - 17:00
Saarbrücken building E1 4, room 019

Abstract:

Web sites that incorporate untrusted content may use browser- or language-based methods to keep such content from maliciously altering pages, stealing sensitive information, or causing other harm. We use accepted methods from the study of programming languages to investigate language-based methods for filtering and rewriting JavaScript code, using Facebook's FBJS as a motivating example.

We explain the core problems by describing previously unknown vulnerabilities and shortcomings, provide JavaScript code that enforces provable isolation properties at run-time, and develop a foundation for improved solutions based on an operational semantics of the full ECMA262 language. We also compare our results with the techniques used in FBJS.
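As a taste of the filtering approach and its pitfalls, here is a deliberately naive blacklist filter sketched in Python (far weaker than FBJS-style rewriting; the identifier list is illustrative):

```python
import re

DANGEROUS = {"eval", "window", "document", "constructor", "__proto__"}

def filter_source(code):
    """Toy source filter for untrusted JavaScript: reject programs that
    mention identifiers granting access to the hosting page. Real
    filters must also handle dynamic property access such as
    o['ev' + 'al'] -- exactly where the vulnerabilities alluded to
    above tend to hide."""
    for ident in re.findall(r"[A-Za-z_$][\w$]*", code):
        if ident in DANGEROUS:
            raise ValueError("forbidden identifier: " + ident)
    return code
```

The gap between what such syntactic checks reject and what the language semantics actually permits is precisely why a formal operational semantics is needed to prove isolation.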

Joint work with Sergio Maffeis and Ankur Taly

Speaker's bio:

http://theory.stanford.edu/people/jcm/



Theory Plus Practice in Computer Security: Radio Frequency Identification and Whitebox Fuzzing
David Molnar | University of California at Berkeley

2009-04-29, 16:00 - 17:00
Saarbrücken building E1 4, room 019 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

I will describe two areas in computer security that demonstrate the wide range of techniques, from both theory and practice, we need to make an impact. First, I treat privacy and security in Radio Frequency Identification (RFID). RFID refers to a range of technologies where a small device with an antenna, or "tag", is attached to an item and can be queried later wirelessly by a reader. While proponents of RFID promise security and efficiency benefits, the technology also raises serious security concerns. I will describe my work on practical security analysis of RFID in library books and the United States e-passport deployments. These deployments in turn uncover a new theoretical problem, that of "scalable private authentication." I will describe the first solution to this problem that scales sub-linearly in the number of RFID tags.

Second, I describe recent work in "whitebox fuzz testing," a new approach to finding security bugs. Security bugs cost millions of dollars to patch after the fact, so we want to find and fix them as early in the deployment cycle as possible. I review previous fuzz testing work, how fuzzing has been responsible for serious security bugs, and classic fuzz testing's inability to deal with "unlikely" code paths. I then show how marrying the idea of dynamic test generation with fuzz testing overcomes these shortcomings, but raises significant scaling problems. Two recent tools, SAGE at Microsoft Research, and SmartFuzz at Berkeley, overcome these scaling problems; I present results on the effectiveness of these tools on commodity Windows and Linux media playing software. Finally, I close with directions for leveraging cloud computing to improve developers' testing and debugging experience.

The talk describes joint work with Ari Juels and David Wagner (RFID), and with Patrice Godefroid, Michael Y. Levin, and Xue Cong Li and David Wagner (Whitebox Fuzzing).

Speaker's bio:

David Molnar is a PhD candidate at the University of California, Berkeley, degree expected Spring 2009. His work centers on privacy, cryptography, and computer security, advised by David Wagner. Most recently, he has been interested in RFID privacy, and in applying constraint solvers to finding software bugs at scale (see http://www.metafuzz.com). He is a previous National Science Foundation Graduate Fellow and Intel Open Collaboration Research Graduate Fellow.



Sachin Katti | University of California at Berkeley

2009-04-27, 16:00 - 17:00
Kaiserslautern building G26, room 204 / simultaneous videocast to Saarbrücken building E1 5, room Wartburg OG 5

Abstract:

Wireless is becoming the preferred mode of network access. The performance of wireless networks in practice, however, is hampered due to the harsh characteristics of the wireless medium: its shared broadcast nature, interference, and high error rate. Traditionally, network designers have viewed these characteristics as problematic, and tried to work around them. In this talk, I will show how we can turn these challenges into opportunities that we exploit to significantly improve performance.

To do so, we use a simple yet fundamental shift in network design. We allow routers to "mix" (i.e., code) packets' content before forwarding them. We built three systems, COPE, ANC and MIXIT, which exploit this network coding functionality via novel algorithms to provide large practical gains. In this talk, I will discuss COPE and ANC; COPE exploits wireless broadcast, while ANC exploits strategic interference to improve throughput.

This work bridges and contributes to two unrelated areas: network coding and wireless mesh network design. It lays down the algorithmic framework for using network coding in modern wireless networks, by designing algorithms which work with the common case of unicast flows in dynamic and unknown environments. It also provides the first implementation, deployment and experimental evaluation of network coding. For wireless mesh networks, it shows how the framework of network coding allows us to profitably harness the inherent wireless characteristics. This union ultimately allows us to deliver a several-fold increase in wireless throughput.
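The simplest instance of this "mixing" is COPE's XOR trick for the classic Alice-router-Bob topology, sketched here in Python (payloads are illustrative):

```python
def xor(p, q):
    """XOR two equal-length packet payloads byte by byte."""
    return bytes(a ^ b for a, b in zip(p, q))

# Toy COPE-style exchange: Alice and Bob each send a packet to the
# other via router R. Instead of forwarding the packets separately,
# R broadcasts a single XOR-coded packet; each side decodes it with
# the packet it already holds, turning 4 transmissions into 3.
pa = b"hello from alice"   # Alice's packet, overheard/held by Alice
pb = b"hello from bob!!"   # Bob's packet, held by Bob (same length)
coded = xor(pa, pb)        # one broadcast from the router
at_bob = xor(coded, pb)    # Bob recovers Alice's packet
at_alice = xor(coded, pa)  # Alice recovers Bob's packet
```

Saving one of every four transmissions in this toy topology is the seed of the larger throughput gains the systems above extract from broadcast and overhearing.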

Speaker's bio:

Sachin Katti is currently a postdoctoral scholar at U.C. Berkeley. He received his PhD in EECS from MIT in September 2008. Before coming to MIT, he received his B.Tech. in Electrical Engineering from the Indian Institute of Technology, Bombay, in 2003. His dissertation research focuses on redesigning wireless networks with network coding as the central unifying design paradigm. The dissertation won the George Sprowls Award for Best Doctoral Dissertation in EECS at MIT and has been nominated for the ACM Doctoral Dissertation Award. His work on network coding was also awarded an MIT Deshpande Center Grant. His research interests are in networks, wireless communications, applied coding theory and security.



Neelakantan R. Krishnaswami | CMU Computer Science Department at Pittsburgh

2009-04-20, 13:30 - 14:30
Saarbrücken building E1 5, room Wartburg OG 5 / simultaneous videocast to Saarbrücken building E1 4, room 019

Abstract:

O'Hearn and Reynolds' separation logic has proven to be a very successful attempt at taming many of the difficulties associated with reasoning about aliased, mutable data structures. Using it, researchers have given correctness proofs of even quite intricate low-level imperative programs such as garbage collectors and device drivers.

However, high-level languages such as ML and Haskell also give programmers access to mutable, aliased data, and when those features are used, programmers are still prone to all the troubles state is heir to. In fact, many problems become more complex, since these languages encourage the use of an abstract, higher-order style, and support the design of libraries that rely on higher-order functions as well as callbacks (i.e., references to functions in the heap).

In this talk, I'll describe work I've done (in collaboration with my PhD supervisors) designing a version of separation logic suitable for use in languages such as ML, and describe an application of this logic to formally verifying the correctness of a small library for writing event-driven programs in a lazy dataflow style. This then allows an efficient imperative implementation of a functional reactive programming library.

Speaker's bio:

-



Self-Adjusting Computation
Umut A. Acar | Toyota Technological Institute at Chicago

2009-04-16, 16:00 - 17:00
Saarbrücken building E1 4, room 019 / simultaneous videocast to Kaiserslautern building G26, room 204/206

Abstract:

Many application domains require computations to interact with data sets that change slowly or incrementally over time. For example, software systems that interact with the physically changing world, e.g., databases, graphical systems, robotic software systems, program-development environments, scientific-computing applications, must respond efficiently and correctly to changes as they occur in the world. Since incremental modifications to data are often small, they can be processed asymptotically faster than re-computing from scratch, often generating orders of magnitude speedups in practice. Realizing this potential using traditional techniques, however, often requires complex algorithmic and software design and implementation, ultimately limiting the range of problems that can effectively be addressed in practice.

In this talk, I present an overview of advances on self-adjusting computation: an approach to developing software systems that interact with changing data. I start by presenting the principal ideas behind a mechanism for propagating a change through a computation, and then describe the design of programming-language constructs for enabling computations to respond automatically and efficiently to modifications to their data. I show that these language constructs are realistic by describing how to extend existing languages with them and how to compile the extended languages into provably efficient executables, whose performance properties can be analyzed via cost semantics. To evaluate the effectiveness of the approach, I consider a number of benchmarks as well as more sophisticated applications from diverse areas such as computational geometry, scientific computing, machine learning, and computational biology. Our results show that self-adjusting computation can be broadly effective in achieving efficient implementations, solving open problems, and pointing to new research directions. In practice, our measurements show that self-adjusting programs often respond to incremental modifications a linear factor faster than recomputing from scratch, resulting in orders of magnitude speedups.
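A minimal sketch of change propagation, the mechanism at the heart of this approach (a toy dependency graph, far simpler than the actual self-adjusting machinery):

```python
class Cell:
    """A Cell holds either an input value or a function over other
    cells; writing an input re-runs only the computations that depend
    on it, cutting off propagation where results are unchanged."""
    def __init__(self, value=None, fn=None, deps=()):
        self.fn, self.deps, self.readers = fn, deps, []
        for d in deps:
            d.readers.append(self)       # record the dependency edge
        self.value = fn(*[d.value for d in deps]) if fn else value

    def write(self, value):              # change an input...
        self.value = value
        self._propagate()

    def _propagate(self):                # ...and push it downstream
        for r in self.readers:
            new = r.fn(*[d.value for d in r.deps])
            if new != r.value:           # unchanged => stop here
                r.value = new
                r._propagate()

a, b = Cell(2), Cell(3)
s = Cell(fn=lambda x, y: x + y, deps=(a, b))   # s.value == 5
a.write(10)                                    # only s recomputes
```

In a large dependency graph, a small input change touches only the affected cells, which is the source of the asymptotic speedups over from-scratch recomputation.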

Speaker's bio:

Umut Acar is an Assistant Professor at Toyota Technological Institute. He received his Ph.D. from Carnegie Mellon University (2005), his M.A. from University of Texas at Austin (1999), and his B.S. from Bilkent University, Turkey (1997). His research interests include language design and implementation, particularly for dynamic systems that interact with changing data from various sources such as users and the physical environment.



Programming with Hoare Type Theory
Aleksandar Nanevski | Microsoft Research, Cambridge, UK

2009-04-14, 16:00 - 17:00
Saarbrücken building E1 4, room 019 / simultaneous videocast to Kaiserslautern building G26, room 204/206

Abstract:

Two main properties make type systems an effective and scalable formal method. First, important classes of programming errors are eliminated by statically enforcing the correct use of values. Second, types facilitate modular software development by serving as specifications of program components, while hiding the component's actual implementation. Implementations with the same type can be interchanged, thus facilitating software reuse and evolution.

Mainstream type systems focus on specifying relatively simple properties that admit type inference and checking with little or no input from the programmer. Unfortunately, this leaves a number of properties, including data structure invariants and API protocols outside of their reach, and also restricts the practical programming features that can be safely supported. For example, most simply-typed languages cannot safely allow low-level operations such as pointer arithmetic or explicit memory management.

In this talk, I will describe Hoare Type Theory (HTT) which combines dependent types of a system like Coq with features for specification and verification of low-level stateful operations in the style of Hoare and Separation Logic.

Such a combination is desirable for several reasons. On the type-theoretic side, it makes it possible to integrate stateful behaviour into dependent type theories that have so far been purely functional. On the Hoare Logic side, it makes it possible to use the higher-order data abstraction and information hiding mechanisms of type theory, which are essential for scaling the verification effort.

I will discuss the implementation of HTT, verification of various examples that I have carried out, as well as the possibilities for extending HTT to support further programming features such as concurrency.
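HTT enforces Hoare-style specifications statically through dependent types; as a purely dynamic caricature, the sketch below attaches a precondition and postcondition to operations on a heap modeled as a dict (all names are invented, and runtime assertions stand in for what HTT proves at type-checking time):

```python
# Dynamic analogue of a Hoare triple {pre} op {post} over an explicit heap.

def hoare(pre, post):
    def wrap(op):
        def checked(heap, *args):
            assert pre(heap, *args), "precondition violated"
            old = dict(heap)                  # snapshot for the postcondition
            result = op(heap, *args)
            assert post(old, heap, result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@hoare(pre=lambda h, l: l in h,                             # {l points to something}
       post=lambda old, h, r, l: r == old[l] and h == old)  # reading is pure
def read(heap, loc):
    return heap[loc]

@hoare(pre=lambda h, l, v: True,
       post=lambda old, h, r, l, v: h[l] == v)              # {l points to v}
def write(heap, loc, val):
    heap[loc] = val

h = {}
write(h, "x", 41)
print(read(h, "x") + 1)   # 42
```

In HTT the pre- and postconditions live in the type of the computation and are discharged by proof, so no snapshotting or runtime checking is needed.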

Speaker's bio:

I am a postdoc researcher at Microsoft Research, Cambridge, working on formal software verification, theorem proving and type theory. Before that, I graduated from University of Skopje, Macedonia, and wrote my dissertation at CMU under Frank Pfenning. Most of my spare time, I spend with my wife Emi, trying to keep our 3-year old daughter Kalina happy.



Functional Programming Perspectives on Concurrency and Parallelism
Matthew Fluet | Toyota Technological Institute at Chicago

2009-03-23, 16:00 - 17:00
Saarbrücken building E1 4, room 019 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

The trend in microprocessor design toward multicore processors has sparked renewed interest in programming languages and language features for harnessing concurrency and parallelism in commodity applications. Past research efforts demonstrated that functional programming provides a good semantic base for concurrent- and parallel-language designs, but this line of work slowed in the absence of widely available multiprocessor machines. I will describe new functional programming approaches towards concurrency and parallelism, grounded in more recent functional programming research.

To frame the discussion, I will introduce the Manticore project, an effort to design and implement a new functional language for parallel programming. Unlike some earlier parallel language proposals, Manticore is a heterogeneous language that supports parallelism at multiple levels. In this talk, I will describe a number of Manticore's notable features, including implicitly-parallel programming constructs (inspired by common functional programming idioms) and a flexible runtime model that supports multiple scheduling disciplines. I will also take a deeper and more technical look at transactional events, a novel concurrency abstraction that combines first-class synchronous message-passing events with all-or-nothing transactions. This combination enables elegant solutions to interesting problems in concurrent programming. Transactional events have a rich compositional structure, inspired by the use of monads for describing effectful computations in functional programming. I will conclude with future research directions in the Manticore project, aiming to combine static and dynamic information for the implementation and optimization of parallel constructs.

Speaker's bio:

-



Serge Egelman | Carnegie Mellon

2009-03-19, 16:00 - 17:00
Saarbrücken building E1 5, room 019 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

In a world where making an incorrect online trust decision can mean the difference between checking your account balance and transferring it to criminals, Internet users need effective security warnings to help them identify risky situations. In a perfect world, software could automatically detect all security threats and then block access to high risk websites. Because there are many threats that we cannot detect with 100% accuracy and false positives are all too frequent, web browser vendors generally opt to warn users about security threats. In this talk I cover the common pitfalls of web browser security warnings and draw parallels with literature in the warning sciences. I describe the results of two laboratory phishing studies I performed in order to examine users' mental models, risk perceptions, and comprehension of current security warnings. Finally, I show how I used these findings to iteratively design and test a more usable SSL warning that clearly conveys risk and uses context to minimize habituation effects.

Speaker's bio:

Serge Egelman is a PhD student within Carnegie Mellon University's School of Computer Science. His main research area is usable privacy and security, which has included work on phishing detection, authentication systems, online privacy, user account models, and online shopping behaviors. His dissertation is on design patterns for creating effective online trust indicators, which are based on user studies that he's done on privacy policies, phishing warnings, and SSL error messages. Serge was a summer intern at PARC in 2006, as well as an intern at Microsoft Research for six months in 2008. While at MSR, he helped the IE team redesign the IE8 phishing warning based on the results of his research. Serge enjoys traveling the world and hopes to visit every UNESCO World Heritage Site, though his more recent pastimes center around graduating and applying for jobs.



Logical Relations: A Step Towards More Secure and Reliable Software
Amal Ahmed | Toyota Technological Institute Chicago

2009-03-10, 16:00 - 17:00
Saarbrücken building E1 4, room 019 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Logical relations are a powerful proof technique for establishing many important properties of programs, programming languages, and language implementations. In particular, they provide a convenient method for proving behavioral equivalence of programs. Hence, they may be used to show that one implementation of an abstract data type (ADT) can be replaced by another without affecting the behavior of the rest of the program; to prove that security-typed languages satisfy noninterference, which requires that confidential data not affect the publicly observable behavior of a program; and to establish the correctness of compiler optimizations.

Yet, despite three decades of research and much agreement about their potential benefits, logical relations are still primarily used to reason about languages that are not even Turing complete. The method has not scaled well to many features found in practical programming languages: support for recursive types (lists, objects) and mutable memory (as in Java or ML) requires sophisticated mathematical machinery (e.g., domain theory, category theory), which makes the resulting logical relations cumbersome to use and hard to extend. Mutable memory is particularly problematic, especially in combination with features like generics (parametric polymorphism) and ADTs.

In this talk, I will describe *step-indexed* logical relations which support all of the language features mentioned above and yet permit simple proofs based on operational reasoning, without the need for complicated mathematics. To illustrate the effectiveness of step-indexed logical relations, I will discuss three applications. The first is a secure multi-language system where we show that code written in different languages may interoperate without sacrificing the abstraction guarantees provided by each language in isolation. The second is imperative self-adjusting computation, a system for efficiently updating the results of a computation in response to changes to some of its inputs; we show that our update mechanism is consistent with a from-scratch run. The third is security-preserving compilation, which ensures that compiled code is no more vulnerable to attacks launched at the level of the target language than the original code is to attacks launched at the level of the source language; we show that the typed closure conversion phase of a compiler has this property.

Speaker's bio:

Amal Ahmed is a Research Assistant Professor at the Toyota Technological Institute at Chicago. She received her Ph.D. in Computer Science from Princeton University in 2004 and spent two years at Harvard as a Postdoctoral Fellow before joining TTI-C in 2006. She holds an A.B. in Computer Science and Economics from Brown University and an M.S. in Computer Science from Stanford. Her research interests lie in programming languages and language-based security, including type theory, semantics, and secure compilation, with a focus on advanced type systems and proof methods for reasoning about mutable state.



Bruce Allen | Max-Planck-Institut für Gravitationsphysik

2009-02-25, 11:00 - 12:00
Saarbrücken building E1 5, room conf.room 5th floor / simultaneous videocast to Kaiserslautern building G26, room 204/206

Abstract:

My research group at the MPI for Gravitational Physics, Hannover, operates a large computer cluster used for data analysis. In principle, the most cost-effective and highest-bandwidth data storage available is the disks local to the compute nodes. In the case of Atlas, there are 1680 disks with an aggregate capacity of 840 TB. With a suitable file system, the array of these disks could form a highly reliable storage system; however, at the moment there appears to be no open-source distributed file system with the necessary RAID-like properties. In this talk I present a list and description of the properties that such a file system should have, and arguments to support the case that it should be possible to achieve this in the real world. If such a file system were available, I believe that thousands of large computer clusters around the world would employ and benefit from it.

Speaker's bio:

The speaker is a physicist, not a computer scientist, and knows almost nothing about file systems. Interaction from the audience in the form of questions, comments and flames is desirable. Without this, due to lack of genuine content, the talk will be dull and will probably end ahead of schedule.



Stochastic Games in Synthesis and Verification
Krishnendu Chatterjee | University of California at Berkeley

2009-02-24, 16:00 - 17:00
Saarbrücken building E1 4, room 019 / simultaneous videocast to Kaiserslautern building G26, room 206

Abstract:

Dynamic games played on game graphs with winning conditions specified as automata provide the theoretical framework for the study of controller synthesis and many problems related to formal verification. Besides synthesis and verification, these games have been used in several other contexts, such as checking interface compatibility, modular reasoning, and checking receptiveness. In this talk we first present different game models, suited to different applications, and the canonical winning conditions that can be specified as automata. We first consider the strictly competitive (zero-sum) game formulation that is appropriate for controller synthesis. We present a brief overview of the field, summarizing the classical results, and then present our results that significantly improve the complexity for several classes of games. We also present practical algorithms for the analysis of several classes of such games.
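The simplest zero-sum graph game is reachability: player 0 wins if the play reaches a target vertex. Its winning region is computed by the classical attractor fixpoint sketched below (a textbook construction, not one of the talk's contributions; all names are illustrative):

```python
# Attractor computation for a two-player reachability game on a graph.
# owner[v] says which player moves at v; player 0 wins from v if it can
# force the play into the current winning set.

def attractor(vertices, edges, owner, target):
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            succs = edges[v]
            if (owner[v] == 0 and any(s in win for s in succs)) or \
               (owner[v] == 1 and succs and all(s in win for s in succs)):
                win.add(v)          # player 0 can force the play into win
                changed = True
    return win

V = ["a", "b", "c", "t"]
E = {"a": ["b", "c"], "b": ["t"], "c": ["c"], "t": ["t"]}
own = {"a": 0, "b": 1, "c": 1, "t": 0}
print(sorted(attractor(V, E, own, {"t"})))  # ['a', 'b', 't']
```

Vertex "c" is losing for player 0 because its only move loops forever away from the target, which is exactly the kind of distinction a synthesized controller must respect.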

We then consider the problem of multi-process verification and argue that the zero-sum formulation is too strong for it, because the environment of a process typically consists of other processes with their own specifications. On the other hand, the notion of Nash equilibrium, which captures rational behavior in the absence of external criteria, is too weak for multi-process verification. We will present a new notion of equilibrium, which we call secure equilibrium, show how this new notion is more appropriate for multi-process verification, and discuss the existence and computation of such equilibria for graph games.

Speaker's bio:

-



Robots, Molecules and Physical Computing
Lydia E. Kavraki | Rice University

2009-02-02, 15:00 - 16:00
Saarbrücken building E1 5, room 019 / simultaneous videocast to Kaiserslautern building G26, room 204/206

Abstract:

The field of computing is increasingly expected to solve complex geometric problems arising in the physical world. Such problems can be found in applications ranging from robotics planning for industrial automation to molecular modeling for studying biological processes. This talk will first describe the development of a set of algorithmic tools for robot motion planning which are often grouped under the name sampling-based algorithms.

Emphasis will be placed on recent results for systems with increased physical realism and complex dynamics. The talk will then discuss how the experience gained through sampling-based methods in robotics has led to algorithms for characterizing the flexibility of biomolecules for drug discovery. Together, these threads illustrate a new trend in Computer Science: the development of algorithmic frameworks for addressing complex high-dimensional geometric problems arising, at different scales, in the physical world. The challenges of physical computing will be highlighted, as well as the opportunities to impact molecular biology and medicine.
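Sampling-based planners such as the probabilistic roadmap (PRM) sidestep an explicit representation of configuration space: sample random configurations, keep the collision-free ones, connect nearby pairs whose straight-line motion is free, and search the resulting roadmap. A minimal 2D sketch with circular obstacles (parameters and names are illustrative, not code from the talk):

```python
import math
import random
from collections import deque

def free(p, obstacles):
    # Collision-free iff outside every circular obstacle (center, radius).
    return all(math.dist(p, c) > r for c, r in obstacles)

def segment_free(p, q, obstacles, steps=20):
    # Check the straight-line motion by sampling points along it.
    return all(free((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])),
                    obstacles)
               for t in (i / steps for i in range(steps + 1)))

def prm(start, goal, obstacles, n=300, radius=0.25, seed=0):
    random.seed(seed)
    nodes = [start, goal] + [p for p in ((random.random(), random.random())
                                         for _ in range(n))
                             if free(p, obstacles)]
    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):               # connect nearby free segments
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) < radius and \
               segment_free(nodes[i], nodes[j], obstacles):
                adj[i].append(j)
                adj[j].append(i)
    seen, frontier = {0}, deque([0])          # BFS from start (index 0)
    while frontier:
        v = frontier.popleft()
        if v == 1:
            return True                       # goal (index 1) is reachable
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return False

print(prm((0.05, 0.05), (0.95, 0.95), [((0.5, 0.5), 0.2)]))
```

The talk's emphasis on dynamics corresponds to replacing the straight-line local connector here with one that respects the system's differential constraints.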

Speaker's bio:

Lydia E. Kavraki is the Noah Harding Professor of Computer Science and Professor of Bioengineering at Rice University. She also holds a joint appointment at the Department of Structural and Computational Biology and Molecular Biophysics at the Baylor College of Medicine in Houston. Kavraki received her B.A. in Computer Science from the University of Crete in Greece and her Ph.D. in Computer Science from Stanford University working with Jean-Claude Latombe. Her research contributions are in physical algorithms and their applications in robotics (robot motion planning, hybrid systems, assembly planning, micromanipulation, and flexible object manipulation) and computational structural biology and bioinformatics (modeling of proteins and biomolecular interactions, computer-assisted drug design and the large-scale functional annotation of proteins). Kavraki has authored more than 100 peer-reviewed journal and conference publications and is one of the authors of a new robotics textbook titled "Principles of Robot Motion" published by MIT Press. She is currently a member of the editorial advisory board of the Springer Tracts in Advanced Robotics, an associate editor for the IEEE Transactions on Computational Biology and Bioinformatics and for ACM Computing Surveys. Kavraki is the recipient of the Association for Computing Machinery (ACM) Grace Murray Hopper Award for her technical contributions. She has also received an NSF CAREER award, a Sloan Fellowship, the Early Academic Career Award from the IEEE Society on Robotics and Automation, a recognition as a top young investigator from the MIT Technology Review Magazine, and the Duncan Award for excellence in research and teaching from Rice University. Kavraki is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the American Institute for Medical and Biological Engineering (AIMBE) and a Fellow of the World Technology Network. 
She currently serves as a Distinguished Lecturer for the IEEE Robotics and Automation Society. Current projects at Kavraki's laboratory are described in http://www.kavrakilab.org. More information on Kavraki's work can be found in: http://www.cs.rice.edu/~kavraki.



Dejan Kostic | EPFL

2008-11-21, 14:00 - 15:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:

Distributed systems form the foundation of our society's infrastructure. Complex distributed protocols and algorithms are used in enterprise storage systems, distributed databases, large-scale planetary systems, and sensor networks. Errors in these protocols translate to denial of service to some clients, potential loss of data, and even monetary losses. Unfortunately, it is notoriously difficult to develop reliable high-performance distributed systems that run over asynchronous networks, such as the Internet. Even if a distributed system is based on a well-understood distributed algorithm, its implementation can contain coding bugs and errors arising from complexities of realistic distributed environments.

This talk describes CrystalBall, a new approach for developing and deploying distributed systems. In CrystalBall, nodes predict distributed consequences of their actions, and use this information to detect and avoid errors. Each node continuously runs a state exploration algorithm on a recent consistent snapshot of its neighborhood and predicts possible future violations of specified safety properties. We describe a new state exploration algorithm, consequence prediction, which explores causally related chains of events that lead to property violation. Using CrystalBall, we identified new bugs in mature Mace implementations of a random overlay tree, the BulletPrime content distribution system, and the Chord distributed hash table. Furthermore, we show that if the bug is not corrected during system development, CrystalBall is effective in steering the execution away from inconsistent states at run-time, with low false negative rates.
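The heart of the approach, predicting consequences from a snapshot and steering away from actions whose reachable futures violate a safety property, can be caricatured by a bounded breadth-first exploration (a generic sketch with invented names; consequence prediction itself prunes the search to causally related event chains):

```python
from collections import deque

def violates_within(state, actions, safe, depth):
    """True if some action sequence of length <= depth reaches an unsafe state."""
    frontier = deque([(state, 0)])
    seen = {state}
    while frontier:
        s, d = frontier.popleft()
        if not safe(s):
            return True
        if d == depth:
            continue
        for act in actions:
            t = act(s)
            if t not in seen:
                seen.add(t)
                frontier.append((t, d + 1))
    return False

# Toy protocol state: a counter whose safety property is "stays below 3".
inc = lambda s: s + 1
dec = lambda s: max(0, s - 1)
safe = lambda s: s < 3

def steer(state, proposed, depth=2):
    # Execution steering: apply the action only if its predicted futures
    # are violation-free within the exploration horizon.
    nxt = proposed(state)
    return nxt if not violates_within(nxt, [inc, dec], safe, depth) else state

print(steer(0, inc, depth=1), steer(0, inc, depth=2))  # 1 0
```

With a deeper horizon the node refuses the increment, because from state 1 two further increments reach the unsafe state 3; this is the run-time avoidance behavior, in miniature.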

Speaker's bio:

Dejan Kostic obtained his Ph.D. in Computer Science at Duke University, under Amin Vahdat. He spent the last two years of his studies, and a brief stay as a postdoctoral scholar, at the University of California, San Diego. He received his Master of Science degree in Computer Science from the University of Texas at Dallas, and his Bachelor of Science degree in Computer Engineering and Information Technology from the University of Belgrade (ETF), Serbia. In January 2006, he started as a tenure-track assistant professor at the School of Computer and Communications Sciences at EPFL (Ecole Polytechnique Fédérale de Lausanne), Switzerland. His interests include Distributed Systems (Peer to Peer Computing, Overlay Networks), Computer Networks, Operating Systems, and Mobile Computing.



Defending Networked Resources Against Floods of Unwelcome Requests
Michael Walfish | University of Texas and University College London

2008-11-14, 14:00 - 15:00
Saarbrücken building E1 5, room Rotunda 6th floor

Abstract:

The Internet is afflicted by unwelcome "requests", defined broadly as claims on a scarce resource, such as a server's CPU (in the case of spurious traffic whose purpose is to deny service) or a human's attention (in the case of spam). Traditional responses to these problems apply heuristics: they try to identify "bad" requests based on their content (e.g., in the way that spam filters analyze an email's text). This talk argues that heuristics are inherently gameable and that defenses should instead aim to allocate resources proportionally to all clients (so if, say, 10% of the requesters of some scarce resource are "bad", those clients should be limited to 10% of the resources).

To meet this goal, this talk presents two systems. The first is a denial-of-service mitigation in which clients are encouraged to automatically send *more* traffic to a besieged server. The "good" clients can thereby compete equally with the "bad" ones. The second is a distributed system for enforcing per-sender email quotas to control spam. This system scales to a workload of millions of requests per second, tolerates arbitrary faults in its constituent hosts, and resists a variety of attacks. It achieves this fault-tolerance despite storing only one copy (roughly) of any given datum and, ultimately, does a fairly large job with fairly little mechanism.
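The quota idea can be sketched on a single node: each sender is allotted a fixed number of "stamps" per period, and a stamp is cancelled the first time it is seen, so a spammer cannot exceed its quota however it addresses mail (the talk's system distributes and replicates this state across mutually distrusting, possibly faulty hosts; all names below are invented):

```python
# Single-node caricature of per-sender email quotas via cancellable stamps.

class QuotaEnforcer:
    def __init__(self, daily_quota):
        self.quota = daily_quota
        self.cancelled = set()        # stamps already used

    def admit(self, sender, serial):
        if not (0 <= serial < self.quota):
            return False              # stamp outside the sender's quota
        stamp = (sender, serial)
        if stamp in self.cancelled:
            return False              # reuse: second message on same stamp
        self.cancelled.add(stamp)
        return True

q = QuotaEnforcer(daily_quota=2)
print([q.admit("spammer", s) for s in (0, 1, 1, 2)])  # [True, True, False, False]
```

The distributed version must answer the same "seen before?" question at millions of requests per second while storing roughly one copy of each cancellation, which is where the system's fault-tolerance machinery comes in.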

Speaker's bio:

-



Ricardo Jimenez-Peris | TU Madrid

2008-11-13, 14:00 - 15:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:

Database replication has received a lot of attention during the last decade. This wave of research has concentrated on how to obtain scalable database replication. This talk will be devoted to the most recent advances in attaining scalable database replication contributed by the Distributed Systems Lab (LSD) at Universidad Politécnica de Madrid. It will address the most important bottlenecks that limit scalability, such as isolation level, degree of replication, recovery, and engineering issues, and will cover techniques to overcome these bottlenecks. Special emphasis will be put on the role of snapshot isolation as an enabling factor for large-scale replication, overcoming the scalability limit of serializability. The talk will also cover other techniques that build upon snapshot isolation to overcome further scalability bottlenecks, such as partial replication.

Speaker's bio:



The Evolution of Java(TM) software on GNU/Linux
Dalibor Topic | Sun Microsystems

2008-09-18, 15:30 - 17:00
Saarbrücken building E1 5, room Rotunda, 6th floor

Abstract:

The inclusion of OpenJDK 6 into the core of Fedora, Debian, OpenSuse and Ubuntu has enabled millions of GNU/Linux users to easily obtain the latest version of the Java SE platform. This has changed the packaging landscape for Java software, since ISVs and distro builders can now rely on the Java platform being available out of the box. Planned enhancements to the Java programming language aim to further simplify packaging by making Java software more modular and more explicit about its dependencies.

Speaker's bio:

Dalibor Topic is the lead developer of the kaffe.org virtual machine project, and serves on the OpenJDK Interim Governance Board. After spending a couple of years working closely with independent Free Software JVM projects around GNU Classpath and with GNU/Linux distributors on turning the idea of Open Source Java into reality, he is now working for Sun Microsystems in Hamburg as Sun's Java F/OSS Ambassador.



FlightPath: Obedience vs. Choice
Harry Li | UT Austin

2008-09-08, 10:00 - 11:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:

In this talk, I will present FlightPath, a novel peer-to-peer streaming application that provides a highly reliable data stream to a dynamic set of peers. FlightPath offers a more stable stream than previous works by several orders of magnitude. I will explain the techniques we use to maintain such stability despite peers that act maliciously and selfishly.

More broadly, this talk will discuss the core of FlightPath's success: approximate equilibria. I will highlight how these equilibria let us rigorously design incentives to limit selfish behavior, yet also provide the flexibility to build practical systems. Specifically, I will show how we use epsilon-Nash equilibria to engineer a live streaming system to use bandwidth efficiently, absorb flash crowds, adapt to sudden peer departures, handle churn, and tolerate malicious activity.

Speaker's bio:

-



Geoffrey Washburn | EPFL, Switzerland

2008-08-27, 16:00 - 17:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:

The Scala language (http://www.scala-lang.org/) aims to unify object-oriented and functional programming, while maintaining full interoperability with the Java language. However, while Scala has been under active development since 2003, there has yet to be a satisfactory formal model of Scala. There are several calculi that come close, but all have discrepancies in expressive power, some are lacking complete proofs, and some are unsound.

In this talk, I will give a short introduction to Scala, review several calculi that fall short of providing a formal model of Scala, and give an overview of the calculus I have been developing, Scala Classic, that will help fill this gap in the foundations of Scala.

Speaker's bio:

-



Verifying C++ programs that use the STL
Daniel Kroening | Oxford University

2008-07-25, 11:00 - 12:00
Saarbrücken building E1 5, room 007

Abstract:

We describe a flexible and easily extensible predicate-abstraction-based approach to the verification of STL usage, and observe the advantages of verifying programs in terms of high-level data structures rather than low-level pointer manipulations. We formalize the semantics of the STL by means of a Hoare-style axiomatization. The verification requires an operational model that conservatively approximates the semantics given by the C++ standard. Our results show advantages (in terms of errors detected and false positives avoided) over previous attempts to analyze STL usage.

Speaker's bio:

Daniel Kroening received his M.E. and doctoral degrees in computer science from the University of Saarland in 1999 and 2001, respectively. He joined the Model Checking group in the Computer Science Department at Carnegie Mellon University, Pittsburgh, PA, USA, in 2001 as a Post-Doc.

He was an assistant professor at the Swiss Technical Institute (ETH) in Zurich, Switzerland, from 2004 to 2007. He is now a lecturer at the Computing Laboratory at Oxford University. His research interests include automated formal verification of hardware and software systems, decision procedures, embedded systems, and hardware/software co-design.



Practical pluggable types for Java
Michael Ernst | MIT

2008-07-17, 15:00 - 17:00
Saarbrücken building E1 5, room Rotunda 6th floor

Abstract:

This talk introduces the Checker Framework, which supports adding pluggable type systems to the Java language in a backward-compatible way. A type system designer defines type qualifiers and their semantics, and a compiler plug-in enforces the semantics. Programmers can write the type qualifiers in their programs and use the plug-in to detect or prevent errors. The Checker Framework is useful both to programmers who wish to write error-free code, and to type system designers who wish to evaluate and deploy their type systems.

The Checker Framework includes new Java syntax for expressing type qualifiers; declarative and procedural mechanisms for writing type-checking rules; and support for flow-sensitive local type qualifier inference and for polymorphism over types and qualifiers. The Checker Framework is well-integrated with the Java language and toolset.

We have evaluated the Checker Framework by writing 5 checkers and running them on over 600K lines of existing code. The checkers found real errors, then confirmed the absence of further errors in the fixed code. The case studies also shed light on the type systems themselves.
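The Checker Framework performs these checks statically on Java source. As a loose dynamic analogue, the sketch below carries a qualifier in `typing.Annotated` and enforces it at call boundaries in Python (all names are invented, and this is runtime checking, not the framework's compile-time analysis):

```python
import functools
import inspect
import typing

UNTAINTED = "untainted"               # our one type qualifier

class Tainted(str):
    """A string originating from an untrusted source."""

def check_qualifiers(fn):
    # Read Annotated metadata once, then enforce it on every call.
    hints = typing.get_type_hints(fn, include_extras=True)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = inspect.signature(fn).bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            hint = hints.get(name)
            if hint is not None and UNTAINTED in getattr(hint, "__metadata__", ()):
                if isinstance(value, Tainted):
                    raise TypeError(f"tainted value passed for {name!r}")
        return fn(*args, **kwargs)
    return wrapper

@check_qualifiers
def run_query(sql: typing.Annotated[str, UNTAINTED]):
    return f"executing: {sql}"

print(run_query("SELECT 1"))                  # accepted
try:
    run_query(Tainted("'; DROP TABLE users"))
except TypeError as e:
    print("rejected:", e)
```

In the real framework the qualifier hierarchy, flow-sensitive inference, and polymorphism all happen in the compiler plug-in, so violations are reported before the program ever runs.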

Speaker's bio:

-



Integrating Formal Verification into the Model-based Development of Adaptive Embedded Systems
Ina Schaefer | TU Kaiserslautern

2008-07-17, 14:00 - 15:00
Saarbrücken building E1 5, room 007

Abstract:

Model-based development of adaptive embedded systems is an approach to deal with the increased complexity that adaptation imposes on system design. Integrating formal verification techniques into this design process provides means to rigorously prove critical properties. However, most verification tools are based on foundational models, e.g. automata, unable to express intuitive notions used in model-based development appropriately. Furthermore, automatic methods such as model checking are only efficiently applicable to systems of limited size due to the state-explosion problem. Our approach to alleviate these problems uses a semantics-based integration of model-based development and formal verification for adaptive embedded systems, allowing design-level models to be captured at a high level of abstraction. Verification complexity induced by the applied modelling concepts is reduced by verified model transformations. These transformations include model slicing, data domain abstractions, and compositional reasoning techniques. The overall approach as well as the model transformations have been evaluated together with the development of an adaptive vehicle stability control system.

Speaker's bio:

-



The Expandable Network Disk
Athicha Muthitacharoen | MIT

2008-07-14, 16:00 - 17:00
Saarbrücken building E1 5, room Rotunda 6th floor

Abstract:

In this talk, I will present my recent work on the Expandable Network Disk (END). END aggregates storage on a cluster of machines into a single virtual disk. END's main goals are to offer good performance during normal operation, and to resume operation quickly after changes in the cluster, specifically machine crashes, reboots, and additions.

END achieves these goals using a two-layer design, in which storage ``bricks'' hold two kinds of information. The lower layer stores replicated immutable ``chunks'' of data, each indexed by a unique key. The upper layer maps each block address to the key of its current content; each mapping is held on two bricks using primary-backup replication. This separation allows END flexibility in where it stores chunks and thus efficiency: it writes new chunks to bricks chosen for speed, it moves only address mappings (not data) when bricks fail and recover, it fully replicates new writes when a brick is unavailable, and it uses chunks on a recovered brick without risk of staleness.

The END prototype's write throughput on a cluster of 16 PC-based bricks is 150 MByte/s with 2x replication, about 70% of the aggregate throughput of the underlying hardware. END continues operating after a single brick failure, re-incorporates a rebooting brick, and expands to include a new brick, with only a few seconds of reduced performance during each change. (Joint work with Robert Morris.)
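The two-layer split can be sketched as a content-addressed chunk store beneath a mutable block-address map (a single-process illustration under assumed names; END additionally replicates both layers across bricks and chooses where chunks live for speed):

```python
import hashlib

class TwoLayerDisk:
    def __init__(self):
        self.chunks = {}      # lower layer: key -> immutable data
        self.addr_map = {}    # upper layer: block address -> current key

    def write(self, addr, data):
        key = hashlib.sha256(data).hexdigest()
        self.chunks[key] = data           # immutable, so never stale
        self.addr_map[addr] = key         # only the mapping is mutable

    def read(self, addr):
        return self.chunks[self.addr_map[addr]]

d = TwoLayerDisk()
d.write(0, b"v1")
old_key = d.addr_map[0]
d.write(0, b"v2")                  # old chunk remains; address repoints to v2
print(d.read(0), old_key in d.chunks)   # b'v2' True
```

Because chunks are immutable and keyed by content, recovery only has to move the small address mappings, not the data, and a chunk found on a recovered brick can be trusted without a staleness check.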

Speaker's bio:

Athicha Muthitacharoen is a Ph.D. candidate in Computer Science at MIT. She also received her B.S. and M.Eng. degrees from MIT. She is interested in distributed systems, especially in the areas of storage, fault-tolerance, and online social networks.



Ct and Pillar: Building a Foundation for Many-Core Programming
Neal Glew | Intel Corporation

2008-07-03, 15:00 - 16:00
Saarbrücken building E1 5, room Rotunda 6th floor

Abstract:

Seemingly fundamental limitations in hardware manufacturing are driving an industry-wide move away from speedy but complex single core processors towards simpler but massively parallel many-core processors. The job of discovering parallelism (and hence achieving performance) on these new chips is left to software: that is, the programmers and their tools. Parallel programming has traditionally been a specialty area requiring extensive expertise, and non-deterministic concurrency introduces vast new classes of exceptionally difficult to eliminate bugs. In short, the job of programming becomes much harder on many-core processors. In order for programmers to cope successfully with these challenges, the software tools used to program many-core processors must take a giant leap forward. Specifically, programming abstractions and languages must be designed to allow programmers to easily express parallelism in a way that is scalable, performant, and most of all, correct. This talk discusses the problem in more detail, and describes two projects aimed at supporting this goal. The Pillar implementation language is a C-like high level compiler target language intended to provide the key sequential and concurrent constructs needed to efficiently implement task-parallel languages, while the Ct language is a system for exploiting the key area of data-parallelism using ideas from functional programming. Together, these two systems provide a foundation upon which a wide variety of abstractions and languages for expressing parallelism can be built.

Speaker's bio:

Neal got his PhD in computer science from Cornell University in 2000 before going to work for InterTrust Technologies Corporation. He shifted to Intel in 2002, where he is today, and worked on Java virtual machines, typed assembly language, and parallel languages.

Neal was the co-recipient this year of the Most Influential POPL Paper (10 years later) award, for his POPL'98 paper with Morrisett, Walker, and Crary, "From System F to Typed Assembly Language."



Disjunctive Invariants for Modular Static Analysis
Corneliu Popeea | National University of Singapore

2008-06-30, 17:00 - 18:00
Saarbrücken building E1 5, room Rotunda 6th floor

Abstract:

We study the application of modular static analysis to proving program safety and to detecting program errors. In particular, we shall consider imperative programs that rely on numerical invariants.

To handle the challenges of disjunctive analyses, we introduce the notion of affinity to characterize how closely related a pair of disjuncts is. Finding related elements in the conjunctive (base) domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) base domain. We have implemented a static analyzer based on disjunctive polyhedral analysis in which the relational domain and the proposed operators can progressively enhance precision at a reasonable cost. Our second objective is to support either a proof of the absence of bugs in the case of a valid program or bug finding in the case of a faulty program. We propose a dual static analysis that is designed to concurrently track two over-approximations: the success and the failure outcomes. Due to the concurrent computation of outcomes, we can identify two significant input conditions: a never-bug condition that implies safety for inputs that satisfy it, and a must-bug condition that characterizes inputs that lead to true errors in the execution of the program.
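As a rough illustration of the affinity idea (a toy sketch only: it uses one-dimensional intervals in place of polyhedra, and the `affinity` and `bounded_join` functions here are invented for this example, not taken from the analyzer):

```python
from itertools import combinations

# Toy 1-D "disjuncts": closed intervals (lo, hi) standing in for polyhedra.
def hull(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def size(iv):
    return iv[1] - iv[0]

def affinity(a, b):
    # Fraction of the joint hull already covered by the two disjuncts:
    # 1.0 means merging the pair loses no precision.
    h = hull(a, b)
    overlap = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    covered = size(a) + size(b) - overlap
    return covered / size(h) if size(h) else 1.0

def bounded_join(disjuncts, k):
    # Keep at most k disjuncts, always merging the most-related pair first.
    ds = list(disjuncts)
    while len(ds) > k:
        a, b = max(combinations(ds, 2), key=lambda p: affinity(*p))
        ds.remove(a); ds.remove(b); ds.append(hull(a, b))
    return sorted(ds)

# Merging [0,2] with [1,3] (affinity 1.0) is preferred over merging
# either of them with the distant disjunct [10,11].
print(bounded_join([(0, 2), (1, 3), (10, 11)], 2))
```

The point of the sketch is only that a quantitative relatedness measure lets the powerset domain stay bounded without the precision loss of a blind convex hull.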

Speaker's bio:

Corneliu Popeea is a PhD candidate in the School of Computing at the National University of Singapore. He is supervised by Prof Chin Wei-Ngan. He has received his B.Sc. in computer science from the University Politehnica of Bucharest (Romania) in 2001. His research interests lie in programming languages and software engineering. More specifically, he worked on disjunctive fixed point analysis, inference of pre/postconditions and type systems for object-oriented languages.



Liuba Shrira | Brandeis University

2008-06-18, 10:30 - 11:30
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:



Kurzweil predicts that computers will enable people to live forever and that doctors will be backing up your memories by the late 2030s. This talk is not about that, yet. Instead, the remarkable drop in disk costs makes it possible and attractive to retain past application states and store them for a long time for "time-travel". A still-open question is how best to organize long-lived past-state storage. Split snapshots are a recent approach to virtualized past states that is attractive for several reasons. Split snapshots are persistent, can be taken with high frequency, and they are transactionally consistent. Unmodified database application code can run against them. Like no other approach, they provide low-cost discriminated garbage collection of unneeded snapshots, a useful feature in long-lived systems. Lastly, compared to a temporal database, split snapshots are significantly simpler and more general, since they virtualize disk blocks rather than logical records.

Several novel techniques underlie split snapshots. An extended recovery invariant makes it possible to create consistent copy-on-write snapshots without blocking, a new kind of persistent index provides fast snapshot access, and a new snapshot storage organization incrementally garbage-collects selected copy-on-write snapshots without copying and without creating disk fragmentation. Measurements of a prototype system indicate that the new techniques are efficient and scalable, imposing a minimal (4%) performance penalty on a storage system under expected common workloads. (Joint work with Ross Shaull and Hao Xu)
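A toy model of the copy-on-write idea behind split snapshots (all class and method names here are illustrative, not the system's actual interface):

```python
class SplitSnapshotStore:
    # Toy model of split snapshots: the current state lives in `pages`,
    # while pre-overwrite copies of pages go to a per-snapshot side store.
    def __init__(self, npages):
        self.pages = [0] * npages
        self.snaps = {}          # snapshot id -> {page index: saved value}
        self.next_snap = 0

    def snapshot(self):
        sid = self.next_snap
        self.next_snap += 1
        self.snaps[sid] = {}
        return sid

    def write(self, page, value):
        # Copy-on-write: preserve the old value for every live snapshot
        # that has not yet saved its own copy of this page.
        for saved in self.snaps.values():
            saved.setdefault(page, self.pages[page])
        self.pages[page] = value

    def read_snapshot(self, sid, page):
        # A snapshot sees its saved copy if the page changed since it was
        # taken; otherwise the page is shared with the current state.
        return self.snaps[sid].get(page, self.pages[page])

    def discard(self, sid):
        # Discriminated GC of one unneeded snapshot: drop only its copies.
        del self.snaps[sid]

store = SplitSnapshotStore(4)
store.write(0, 7)
s = store.snapshot()
store.write(0, 42)
print(store.read_snapshot(s, 0), store.pages[0])  # snapshot sees 7, current is 42
```

Because a snapshot only stores pages that were later overwritten, unneeded snapshots can be dropped individually at low cost, which is the "discriminated garbage collection" property mentioned above.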

Speaker's bio:

Liuba Shrira is an Associate Professor in the Computer Science Department at Brandeis University, and is affiliated with the Computer Science and Artificial Intelligence Laboratory at MIT. From 1986 to 1997 she was a researcher in the MIT/LCS Programming Methodology Group, joining Brandeis in 1997. In 2004-2005 she was a visiting researcher at Microsoft Research, Cambridge, UK.

Her research interests span aspects of design and implementation of distributed systems and especially storage systems. This includes fault-tolerance, availability and performance issues. Her recent focus is on long-lived transactional storage, time travel (in storage), software upgrades, byzantine faults, and support for collaborative access to long-lived objects.





Rob Sherwood | University of Maryland

2008-05-29, 15:00 - 16:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:



Despite its increasing importance in our lives, the Internet remains insecure and its global properties unknown. Spam, phishing, and Denial of Service (DoS) attacks have become common, while global properties as basic as the router-connectivity graph continue to elude researchers. Further, these two problems are inter-related: curtailing abuse exposes gaps in knowledge of the Internet's underlying structure, and studying the underlying structure exposes new techniques to curtail abuse. My research leverages this insight by working on both securing and understanding the Internet.

In this talk, I first discuss my work in securing the Internet by describing Opt-Ack, a DoS attack on the network using optimistic acknowledgments. With this attack, malicious TCP receivers "optimistically" acknowledge packets they did not receive and cause unwitting TCP senders to flood the network. Using Opt-Ack, the resulting traffic flood is hundreds to millions of times the attacker's true bandwidth. I demonstrate randomly skipped segments, an efficient and incrementally deployable solution to the Opt-Ack attack.
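The amplification at the heart of Opt-Ack can be illustrated with a small simulation (a sketch only; the constants and the `run` loop are invented for illustration, not taken from the paper):

```python
# Toy simulation of Opt-Ack amplification: a cumulative-ACK sender trusts
# acknowledgments, so a receiver that ACKs data it never saw drives the
# sender's window forward at full speed.
MSS = 1460          # bytes per segment the sender transmits
ACK_SIZE = 40       # bytes per (possibly spoofed) acknowledgment

def run(optimistic, rounds=100, window=10):
    sent = acked = ack_bytes = 0
    for _ in range(rounds):
        # Sender: fill the window beyond the highest cumulative ACK.
        sent = max(sent, acked + window)
        # Receiver: an honest one ACKs only what actually arrived here
        # (one segment per round in this toy model); an optimistic
        # attacker ACKs the entire window without receiving it.
        acked = sent if optimistic else acked + 1
        ack_bytes += ACK_SIZE
    return sent * MSS, ack_bytes

flood, cost = run(optimistic=True)
honest, _ = run(optimistic=False)
print(f"attacker spends {cost} B of ACKs, elicits {flood} B "
      f"({flood // cost}x amplification); honest transfer: {honest} B")
```

Even this crude model shows the asymmetry: the attacker's cost is small, fixed-size ACK packets, while the sender's response grows with the window it is tricked into filling.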

Second, I describe my work in understanding the Internet with DisCarte, a constraint-solving system that infers the Internet router-connectivity graph. DisCarte uses disjunctive logic programming to cross-validate topology information from TTL-limited traceroute probes and the often ignored IP Record Route option against observed network engineering practices. Compared to previous techniques, router-connectivity graphs produced by DisCarte are more accurate and contain more features.

Speaker's bio:

Rob Sherwood is completing his Ph.D. in Computer Science at the University of Maryland. His work is in networking and security, and he is advised by Bobby Bhattacharjee and Neil Spring. Rob has worked on many aspects of network security, including anonymous communications, fair file sharing, Denial-of-Service prevention, and reputation-based trust. He obtained his B.S. from the University of Maryland and is a member of the Association for Computing Machinery (ACM).

Rob Sherwood is a postdoc candidate



Specification and Analysis of C(++) programs with Frama-C and ACSL.
Virgile Prevosto | CEA, Saclay

2008-05-27, 16:00 - 17:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:

Frama-C is a framework dedicated to building static analyzers for C programs and making them cooperate to achieve better results. One important component of this cooperation is the ANSI/ISO C Specification Language (ACSL), a JML-like language for formally specifying the behavior of C functions and statements. The analyses can indeed generate verification conditions as ACSL formulas that are taken as input by the other analyses (and hopefully discharged at some point). This talk will first present the existing Frama-C analyses. Then, we will have a look at the main ACSL constructions. Last, we will show how these constructions can be used for the specification of existing code, and how Frama-C could be extended to deal with C++.

Speaker's bio:

-



Rethinking bulk data transfers for next-generation applications
Himabindu Pucha | School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania

2008-04-30, 11:00 - 12:30
Saarbrücken building E1 5, room 019 / simultaneous videocast to Uni Kaiserslautern building 34, room 217

Abstract:

How did you use the Internet today? The answer to this question has significantly evolved in the last decade. Ten years ago, we were browsing simple websites with text and images, and communicating via instant messaging and emails. In addition to these applications, today's users are engaging in on-demand video streaming, multimedia conferencing, and sharing files from software updates to personal music, and as a result transferring large volumes of data (of the order of Mbytes) more frequently than ever. Hence, bulk data transfers at the core of these applications are becoming increasingly important and are expected to provide high throughput and efficiency. Contrary to these expectations, however, our study of file sharing networks confirms previous observations that bulk data transfers are slow and inefficient, motivating the need to rethink their design. In this talk, I will present my approach to address a prominent performance bottleneck for these bulk data transfers: Lack of sufficient sources of data to download from. My work addresses this challenge by (1) exploiting network peers that serve files similar to the file being downloaded, and (2) by coupling all the available network resources with similar data on the local disk of a receiver. My talk will also highlight the system design and implementation for the above solutions. For example, I will discuss handprinting, a novel and efficient algorithmic technique to locate the additional similar network peers with only a constant overhead. Finally, a transfer system that simultaneously benefits from disk and network is required to work well across a diverse range of operating environments and scenarios resulting from varying network and disk performance. I will present the design principles for an all-weather transfer system that adapts to a wide spectrum of operating conditions by monitoring resource availability.
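One plausible way to picture handprinting (a hedged guess at the idea, not the talk's actual algorithm): derive a constant-size fingerprint from a file's chunk hashes, so peers holding similar files can be matched at constant lookup cost:

```python
import hashlib

# Toy sketch of handprinting (illustrative only; the technique in the talk
# may differ): a file's handprint is the K smallest distinct chunk hashes
# under a deterministic ordering, so two files sharing enough chunks are
# likely to share a handprint entry.
CHUNK = 16
K = 4

def chunks(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def handprint(data, k=K):
    hashes = sorted({hashlib.sha1(c).hexdigest() for c in chunks(data)})
    return set(hashes[:k])

def likely_similar(a, b):
    # Constant-size comparison, regardless of how long the files are.
    return bool(handprint(a) & handprint(b))

original = bytes(range(256)) * 8
near_copy = original[:-CHUNK] + b"\x00" * CHUNK      # differs in one chunk
unrelated = bytes(reversed(range(256))) * 8          # shares no chunks

print(likely_similar(original, near_copy), likely_similar(original, unrelated))
```

The appeal of such a scheme is that the handprint can be published in a lookup table: finding additional download sources for similar files then costs a fixed number of queries rather than a per-chunk search.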

Speaker's bio:

Himabindu Pucha is currently a post-doctoral fellow in the Computer Science Department at Carnegie Mellon University. She received her doctorate in December 2007 and her Masters degree in 2003 from the Electrical and Computer Engineering Department at Purdue University. Her research interests span distributed systems, computer networks, and mobile computing. She is an ACM Student Research Competition finalist this year and a recipient of the Google Anita Borg Scholarship and the Purdue Violet Haas award.



Demystifying Internet Traffic
Kashi Vishwanath | University of California, San Diego

2008-04-09, 14:00 - 15:00
Saarbrücken building E1 5, room 019

Abstract:

The Internet has seen tremendous growth since its inception four decades ago. With its increasing importance, there has been a growing emphasis on improving the reliability of the infrastructure. One approach to delivering such reliability is for design engineers, network administrators, and researchers to stress-test potential solutions against a wide variety of deployment scenarios. For instance, web hosting services would wish to ensure that they can deliver target levels of performance and availability under a range of conditions. Similarly, Internet Service Providers (ISPs) would benefit from understanding future growth in traffic demands at individual routers in their networks as a function of emerging applications and an expanding user base.

I argue that one of the key ingredients required to carry out such studies is a deep understanding of Internet traffic characteristics. This talk will try to uncover some of the mysteries surrounding Internet traffic, including its rich structure. I will thus describe the principles and key insights that led to the development of the Swing traffic generator. Swing is the first tool to reproduce realistic and responsive Internet-like traffic in a testbed. Starting from observing packets across a given link, Swing automatically extracts parameters for its detailed multi-level model. It then uses this model to generate live traffic that looks qualitatively similar to the original traffic. More interestingly, Swing provides the user with meaningful knobs to project traffic demands into the future. This includes changing assumptions about user popularity of applications, planned upgrades to the network, as well as changes in the semantics of applications.
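Swing's extract-then-regenerate workflow might be caricatured as follows (purely illustrative: the real model is multi-level and far richer than this single resampling loop, and the "demand knob" here is an invented stand-in for Swing's projection parameters):

```python
import random
random.seed(1)

# Toy sketch of the approach: learn an empirical model from an observed
# packet trace, then generate new traffic by resampling it, with a knob
# to project future demand.
observed = [(0.01, 1500), (0.02, 40), (0.015, 1500), (0.05, 576)]  # (gap s, bytes)

def generate(model, n, demand_knob=1.0):
    trace, t = [], 0.0
    for _ in range(n):
        gap, size = random.choice(model)   # resample the empirical model
        t += gap / demand_knob             # higher demand -> denser packets
        trace.append((round(t, 4), size))
    return trace

baseline = generate(observed, 1000)
future = generate(observed, 1000, demand_knob=2.0)
print(f"baseline ends at t={baseline[-1][0]}s, 2x-demand run at t={future[-1][0]}s")
```

The doubled-demand trace packs the same number of packets into roughly half the time, which is the flavor of projection the abstract describes: keep the learned structure, turn one assumption.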

Speaker's bio:

Kashi V. Vishwanath received his B.Tech. from the Indian Institute of Technology, Bombay in Computer Science in 2001. He will receive his PhD under the supervision of Prof. Amin Vahdat from the University of California, San Diego in June 2008. Kashi's main research interests are in systems and networking with an emphasis on enabling testing and validation of large-scale systems and networked services in laboratory settings. He received the best student paper award at ACM SIGCOMM 2007. Kashi Vishwanath is a faculty candidate.



Practical Type Inference for first-class Polymorphism
Dimitrios Vytiniotis | University of Pennsylvania

2008-04-07, 15:00 - 16:00
Saarbrücken building E1 5, room Rotunda

Abstract:

Type inference is a key component of modern, statically typed, functional programming languages, such as Caml and Haskell. It allows programmers to omit many---and in some cases all---type annotations from their programs.

A different key component of modern programming languages is polymorphism. However, languages with polymorphism typically have ad-hoc restrictions on where and what kind of polymorphic types may occur. Supporting ``first-class'' polymorphism, by lifting those restrictions, is obviously desirable, but it is hard to achieve without sacrificing type inference.

In this talk I will explain the difficulties with type inference for first-class polymorphism, give a historic roadmap of the research on this topic, and present a new type system for first-class polymorphism that improves on earlier proposals: it is an extension of ML type inference; it has a simple, declarative specification; typeability is robust to program transformations; and the specification enjoys a sound, complete and decidable type inference algorithm.

This is joint work with Stephanie Weirich and Simon Peyton Jones.

Speaker's bio:

Dimitrios Vytiniotis is a PhD candidate in Programming Languages at the University of Pennsylvania, and is working under the supervision of Stephanie Weirich. His research interests include programming languages theory and implementation, type system design, semantics of programming languages, automated theorem proving, and formal methods. Dimitrios holds an Electrical Engineering diploma from National Technical University of Athens, Greece, and has worked as a software engineer before arriving at Penn.



Scalability in Computer Games and Virtual Worlds
Johannes Gehrke | Cornell University

2008-03-20, 16:00 - 17:00
Saarbrücken building E1 5, room 019

Abstract:

Computer games and virtual worlds present the next frontier in digital entertainment and social interaction. An important aspect of computer games is the artificial intelligence (AI) of non-player characters. To create interesting AI in games today, we can create complex, dynamic behavior for a very small number of characters, but neither the game engines nor the style of AI programming enables intelligent behavior that scales to a very large number of non-player characters. I will talk about modeling game AI as a data management problem, providing a scalable framework for games with a huge number of non-player characters. I will also discuss applications of this idea to crowd simulations. I will conclude with scalability challenges for Massively Multiplayer Online Games and collaborative environments.

This talk describes joint work with Alan Demers, Christoph Koch, and Walker White.

Speaker's bio:

Johannes Gehrke is Chief Scientist at Fast Search and Transfer and Associate Professor in the Department of Computer Science at Cornell University. Johannes' research interests are in the areas of data management, search, and distributed systems. He has received a National Science Foundation Career Award, an Alfred P. Sloan Fellowship, an IBM Faculty Award, the Cornell College of Engineering James and Mary Tien Excellence in Teaching Award, and the Cornell University Provost's Award for Distinguished Scholarship. He co-authored the undergraduate textbook Database Management Systems (McGraw-Hill, 2002; currently in its third edition), used at universities all over the world. Johannes was Program co-Chair of the 2004 ACM International Conference on Knowledge Discovery and Data Mining (KDD 2004) and Program Chair of the 33rd International Conference on Very Large Data Bases (VLDB 2007).



Intuitive Global Connectivity for Personal Mobile Devices
Bryan Ford | MIT

2008-03-06, 16:00 - 17:00
Saarbrücken building E1 5, room 019

Abstract:

Network-enabled mobile devices are quickly becoming ubiquitous in the lives of ordinary people, but current technologies for providing ubiquitous global *connectivity* between these devices still require experts to set up and manage. Users must allocate and maintain global domain names in order to connect to their devices globally via DNS, they must allocate a static IP address and run a home server to use Mobile IP or set up a virtual private network, they must configure firewalls to permit desired remote access traffic while filtering potentially malicious traffic from unknown parties, and so on. This model of "management by experts" works for organizations with administrative staff, but is infeasible for most consumers who wish to set up and manage their own personal networks.

The Unmanaged Internet Architecture (UIA) is a suite of design principles and experimental protocols that provide robust, efficient global connectivity among mobile devices while relying for configuration only on simple, intuitive management concepts. UIA uses "personal names" rather than traditional global names as handles for accessing personal devices remotely. Users assign these personal names via an ad hoc device introduction process requiring no central allocation. Once assigned, personal names bind securely to the global identities of their target devices independent of network location. Each user manages one namespace, shared among all the user's devices and always available on each device. Users can also name other users to share resources with trusted acquaintances. Devices with naming relationships automatically arrange connectivity when possible, both in ad hoc networks and using global infrastructure when available. We built a prototype implementation of UIA that demonstrates the utility and feasibility of these design principles. The prototype includes an overlay routing layer that leverages the user's social network to provide robust connectivity in spite of network failures and asymmetries such as NATs, a new transport protocol implementing a novel stream abstraction that more effectively supports the highly parallelized and media-oriented applications demanded on mobile devices, and a flexible security framework based on proof-carrying authorization (PCA) that provides "plug-in" interoperability with existing secure naming and authentication systems.
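The location-independent binding of personal names can be sketched like this (a hypothetical API invented for illustration; UIA's real protocols involve cryptographic device introduction and overlay routing):

```python
import hashlib

# Toy sketch of UIA-style personal names: a name binds to a device's
# self-certifying identity (a hash of its public key), not to a network
# address, so the binding survives the device moving between networks.
class Device:
    def __init__(self, pubkey):
        self.eid = hashlib.sha1(pubkey).hexdigest()  # endpoint identity

class Namespace:
    def __init__(self):
        self.names = {}

    def introduce(self, name, device):
        # Ad hoc introduction: bind the personal name to the identity,
        # with no central allocation involved.
        self.names[name] = device.eid

    def resolve(self, name, locator):
        # Resolution maps name -> identity -> wherever the device is now.
        return locator[self.names[name]]

phone = Device(b"phone-public-key")
ns = Namespace()
ns.introduce("my-phone", phone)

locator = {phone.eid: "10.0.0.5"}     # device at home
assert ns.resolve("my-phone", locator) == "10.0.0.5"
locator[phone.eid] = "203.0.113.9"    # device moved networks
assert ns.resolve("my-phone", locator) == "203.0.113.9"
print("name binding survives relocation")
```

The key design point the sketch tries to capture is the split between the stable, user-assigned name-to-identity binding and the volatile identity-to-location mapping that the routing layer maintains.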

Bryan Ford is a faculty candidate

Speaker's bio:

-



Matthew Might | Georgia Institute of Technology

2008-03-03, 16:00 - 17:00
Saarbrücken building E1 5, room 019

Abstract:

The expressive power of functional and object-oriented languages derives in part from their higher-orderness: through closures and objects, code becomes data. This talk focuses on meeting the challenges that such power poses for static analysis and its clients. (To make the talk more accessible, a brief history of higher-order analysis is included.)

Since its discovery in the 1980s, higher-order control-flow analysis (CFA) has enabled many critical program optimizations, such as flow-directed inlining and static virtual-method resolution. Over the past two decades, much research in higher-order analysis focused on improving the speed and precision of CFA. Despite frequent encounters with the limits of CFAs, little progress had been made in moving beyond them, as measured by the kinds of optimizations made possible and the kinds of questions made answerable.

The key limitation of CFAs is the inherently coarse approximation that they inflict upon environment structure. This talk centers on my development of environment analysis---techniques which break through these limitations of twenty years' standing. Of particular note is that these techniques evade the cost/precision tradeoff usually found in program analyses: compared to previous techniques, they provide improvements in both power and precision, yet also reduce the cost of the compile-time analysis in practice. Using environment analysis as a foundation, my recent work on logic-flow analysis has continued to expand the reach of higher-order analysis beyond optimization and into realms such as security and verification.

Matthew Might is a faculty candidate



Speaker's bio:

-



Petr Kuznetsov | Max Planck Institute for Software Systems

2008-02-29, 11:00 - 12:00
Saarbrücken building E1 5, room 019

Abstract:



Making the right model assumptions is crucial in developing robust and efficient computing systems. The ability of a model to solve distributed computing problems is primarily defined by the /synchrony assumptions/ the model makes. Given that many problems in distributed computing are impossible to solve asynchronously, it is very important to determine the minimal synchrony assumptions that are sufficient to solve a given problem. These assumptions can be conveniently encapsulated in the /weakest failure detector/ abstraction.

In this talk, I will focus on defining the "weakest failure detector ever": the failure detector that is strong enough to circumvent /some/ asynchronous impossibility and, at the same time, necessary to circumvent /any/ asynchronous impossibility. In this context, I will consider the /geometrical/ approach, based on modeling a system state as a high-dimensional geometrical object, and a distributed computation as an evolution of this object in space. This approach has been shown to be instrumental in characterizing the class of tasks solvable in /asynchronous/ systems. I will argue that applying these ideas to /partially synchronous/ systems may lead to automatic derivations of the weakest failure detectors for various distributed computing problems, and, eventually, to establishing a theory of distributed computational complexity.

Speaker's bio:

-



Scaling Internet Routing with Legacy Protocols
Paul Francis | Cornell University

2008-01-18, 14:00 - 15:00
Kaiserslautern building G26, room 57/210

Abstract:

The large and constantly growing Internet routing table size is a longstanding problem that leads to increased convergence time, increased boot time, and costly equipment upgrades. The problem exists for both VPN and global routing tables, and there is concern that IPv4 address space exhaustion over the next few years may lead to an increasingly fragmented address space, poor aggregation, and therefore an increase in the rate of routing table growth. To address these issues, the IETF is working hard on new protocols that will shrink routing tables. In this talk, we present a way to shrink routing tables, easily by an order of magnitude or more, without any new protocols. The idea behind our approach, called Virtual Aggregation, is to partition the address space into large Virtual Prefixes, each of which is delegated to a tunneled virtual network composed of a fraction of ISP routers. Virtual Aggregation can be used independently by a single ISP, or cooperatively among a group of ISPs. This talk describes how Virtual Aggregation can be configured and deployed, and gives performance results based on measurements made at a Tier-1 ISP.
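A back-of-the-envelope sketch of Virtual Aggregation's effect on table size (illustrative numbers only; the delegation scheme and constants here are invented for the example, not taken from the talk):

```python
import random
random.seed(0)

# Toy sketch of Virtual Aggregation: the IPv4 space is split into 256 /8
# "virtual prefixes", each delegated to a subset of routers. A router
# stores full routes only for its own virtual prefixes, plus one tunnel
# route per foreign virtual prefix.
N_ROUTERS = 16
VPREFIXES = list(range(256))                  # the 256 /8 blocks
FULL_TABLE = 250_000                          # routes in a full table

# Delegate each virtual prefix to 2 routers (for redundancy).
assignment = {vp: random.sample(range(N_ROUTERS), 2) for vp in VPREFIXES}

def table_size(router):
    mine = [vp for vp, owners in assignment.items() if router in owners]
    specific = FULL_TABLE * len(mine) // len(VPREFIXES)  # my share of real routes
    tunnels = len(VPREFIXES) - len(mine)                 # one route per foreign VP
    return specific + tunnels

sizes = [table_size(r) for r in range(N_ROUTERS)]
print(f"full table: {FULL_TABLE} routes, largest per-router table: {max(sizes)}")
```

With these made-up numbers each router carries only its delegated fraction of the full table plus a handful of tunnel routes, which is the order-of-magnitude shrink the abstract claims, traded against extra tunneling hops for traffic to foreign virtual prefixes.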

Speaker's bio:



Paul has been a researcher in computer networking for going on 20 years now, in such organizations as MITRE, Bellcore, NTT Software Labs, and ACIRI. Within computer networking, Paul's work has centered on routing and addressing, with a particular liking for problems having to do with large and self-configuring networks. Work in this vein extends from Landmark Routing, done in the late 80's, through Yoid end-system (overlay) multicast (late 90's), to recent work on unstructured P2P networks and more scalable end-system multicast. Notoriously, Paul is the inventor of NAT (demonstrating originality, if not prognosticative ability, judging from his bank account). Other innovations of Paul's include shared-tree multicast, the IDMaps host proximity service, shortcut routing (through large non-broadcast subnetworks), and the multiple-addresses approach to site multi-homing, which is the basis for scalable routing in IPv6. Paul joined Cornell University in 2002, where he has worked on IP anycast services, new network management architectures, BGP scalability, overlay multicast, random node selection in P2P networks, new transport protocols, E2E approaches to DoS and worm prevention, and new naming and addressing architectures for the Internet.





Juan Navarro Perez | University of Manchester

2007-12-20, 14:00 - 15:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:



We present an encoding that is able to specify LTL bounded model checking problems within the Bernays-Schönfinkel fragment of first-order logic. This fragment, which also corresponds to the category of effectively propositional problems (EPR) of the CASC system competitions, allows a natural and succinct representation of both a software/hardware system and the property that one wants to verify.

This is part of the research I did during my PhD studies at the University of Manchester, which deals with finding problems suitable for encoding within the effectively propositional class of formulas, and aims to encourage interest in theorem proving in this restricted fragment of first-order logic.

Speaker's bio:

-



Meeyoung Cha | KAIST

2007-12-13, 09:00 - 10:00
Saarbrücken building E1 5, room 019

Abstract:

Multimedia streaming is becoming an integral part of people's lives. In this talk, I will present data analysis of users' viewing behaviors based on two streaming services: YouTube and IPTV. YouTube is the largest VoD service for user-generated content (UGC). Nowadays, UGC sites are creating new viewing patterns and social interactions, empowering users to be more creative, and developing new business opportunities. In this talk, I will present the intrinsic statistical properties of video popularity based on real traces from YouTube. Understanding the popularity characteristics is important because it can bring forward the latent demand created by bottlenecks in the system (e.g., poor search and recommendation engines, lack of metadata, etc.).

Another popular form of streaming is IPTV. After many years of academic and industry work, we are finally witnessing IP multicast thrive in large-scale deployments of IPTV services. In the second part of this talk, I will present previously hidden TV viewing habits, revealed for the first time from real traces. I will also discuss the feasibility of using peer-to-peer distribution for a scalable IPTV system and supporting advanced viewing controls such as DVD-like functionalities, content recommendations, and targeted advertisements.

Speaker's bio:

Meeyoung Cha is a PhD candidate in Computer Science at KAIST, Korea. Her advisor is Dr. Sue Moon. She is working on network design and support for multimedia streaming services. Previously, she was an intern at AT&T Labs Research in NJ, where she participated in a cost comparison of IPTV backbone designs. Recently, she was an intern at Telefonica Research in Barcelona, Spain, and at the University of Cambridge, UK, where she analyzed a nationwide IPTV system and the world's largest VoD service for user-generated content, YouTube. She also maintains interests in path diversity issues in intra- and inter-domain routing. She expects to graduate in Feb 2008.



From Löwenheim to PSL
Moshe Vardi | Rice University

2007-12-12, 17:00 - 18:00
Kaiserslautern building G26, room building 42, lecture hall 110

Abstract:

One of the surprising developments in the area of program verification is how several ideas introduced by logicians in the first part of the 20th century ended up yielding at the start of the 21st century an industrial-standard property-specification language called PSL. This development was enabled by the equally unlikely transformation of the mathematical machinery of automata on infinite words, introduced in the early 1960s for second-order arithmetics, into effective algorithms for model-checking tools. This talk attempts to trace the tangled threads of this development.

Speaker's bio:

Moshe Y. Vardi is the George Professor in Computational Engineering and Director of the Computer and Information Technology Institute at Rice University. He chaired the Computer Science Department at Rice University from January 1994 till June 2002. Prior to joining Rice in 1993, he was at the IBM Almaden Research Center, where he managed the Mathematics and Related Computer Science Department. His research interests include database systems, computational-complexity theory, multi-agent systems, and design specification and verification. Vardi received his Ph.D. from the Hebrew University of Jerusalem in 1981. He is the author and co-author of over 300 technical papers, as well as two books, "Reasoning about Knowledge" and "Finite Model Theory and Its Applications", and the editor of several collections.

Vardi is the recipient of three IBM Outstanding Innovation Awards, a co-winner of the 2000 Goedel Prize, a co-winner of the 2005 ACM Paris Kanellakis Award for Theory and Practice, and a co-winner of the LICS 2006 Test-of-Time Award. He holds honorary doctorates from the University of Saarland, Germany, and the University of Orleans, France. Vardi is an editor of several international journals and the president of the International Federation of Computational Logic. He is a Guggenheim Fellow, as well as a Fellow of the Association for Computing Machinery, the American Association for the Advancement of Science, and the American Association for Artificial Intelligence. He was designated a Highly Cited Researcher by the Institute for Scientific Information, and was elected as a member of the US National Academy of Engineering, the European Academy of Sciences, and the Academia Europaea. He recently co-chaired the ACM Task Force on Job Migration.



Reliable and efficient programming abstractions for sensor networks
Ramki Gummadi | MIT

2007-11-30, 23:00 - 00:00
Saarbrücken building E1 5, room 024

Abstract:

It is currently difficult to build practical and reliable programming systems out of distributed and resource-constrained sensor devices. The state of the art in today's sensornet programming is centered around nesC. nesC is a node-level language---a program is written for an individual node in the network---and nesC programs use the services of the TinyOS operating system. In this talk, I will describe an alternate approach to programming sensor networks that significantly raises the level of abstraction over today's practice. The critical change is one of perspective: rather than writing programs from the point of view of an individual node, programmers implement a central program that conceptually has access to the entire network. This approach pushes to the compiler the task of producing node-level programs that implement the desired behavior.

I will present the Pleiades programming language, its compiler, and its runtime. The Pleiades language extends the C language with constructs that allow programmers to name and access node-local state within the network and to specify simple forms of concurrent execution. The compiler and runtime system cooperate to implement Pleiades programs efficiently and reliably. First, the compiler employs a novel program analysis to translate Pleiades programs into message-efficient units of work implemented in nesC. The Pleiades runtime system orchestrates execution of these units, using TinyOS services, across a network of sensor nodes. Second, the compiler and runtime system employ novel locking, deadlock detection, and deadlock recovery algorithms that guarantee serializability in the face of concurrent execution. We illustrate the readability, reliability, and efficiency benefits of the Pleiades language through detailed experiments, and demonstrate that the Pleiades implementation of a realistic application performs similarly to a hand-coded nesC version that contains more than ten times as much code.

Speaker's bio:

Ramki Gummadi is currently a post-doc at MIT with Prof. Hari Balakrishnan. He is interested in all aspects of Systems and Networking, with a particular emphasis on wireless networks, building and measuring Internet-scale systems, and programming methodologies for simplifying the construction of such large-scale concurrent systems. He received his B.Tech. from IIT-Madras in 1999, his M.S. from UC Berkeley in 2002, and his Ph.D. from USC in 2007. He was awarded a UC Berkeley Regents Fellowship and an ACM Student Research Competition award.



Baltic: Service Combinators for Farming Virtual Machines
Andrew D. Gordon | Microsoft Research

2007-11-20, 14:00 - 15:00
Kaiserslautern building G26, room bldg. 57, rotunda

Abstract:

Based on joint work with Karthikeyan Bhargavan (Microsoft Research) and Iman Narasamdya (University of Manchester).

We consider the problem of managing server farms largely automatically, in software. Automated management is gaining in importance with the widespread adoption of virtualization technologies, which allow multiple virtual machines per physical host. We consider the case where each server is service-oriented, in the sense that the services it provides, and the external services it depends upon, are explicitly described in metadata. We describe the design, implementation, and formal semantics of a library of combinators whose types record and respect server metadata. Our implementation consists of a typed functional script built with our combinators, in control of a Virtual Machine Monitor hosting a set of virtual machines. Our combinators support a range of operations including creation of virtual machines, their interconnection using typed endpoints, and the creation of intermediaries for tasks such as load balancing. Our combinators also allow provisioning and reconfiguration in response to events such as machine failures or spikes in demand. We describe a series of programming examples run on our implementation, based on existing server code for order processing, a typical data centre workload. To obtain a formal semantics for any script using our combinators, we provide an alternative implementation of our interface using a small concurrency library. Hence, the behaviour of the script plus our libraries can be interpreted within a statically typed process calculus. Assuming that server metadata is correct, a benefit of typing is that various server configuration errors are detected statically, rather than sometime during the execution of the script.

Speaker's bio:

Andrew D. Gordon is a Principal Researcher at Microsoft Research, Cambridge. Before joining Microsoft in 1997, Gordon was a Royal Society University Research Fellow at the University of Cambridge Computer Laboratory. He holds degrees in Computer Science from the University of Edinburgh and the University of Cambridge. As a postdoc, he was a member of the Programming Methodology Group at Chalmers University in Gothenburg. Gordon's research interests are in the general area of computer programming languages. He is the co-inventor of two influential process calculi: the spi calculus (with M. Abadi) and the ambient calculus (with L. Cardelli). His recent work focuses on applying type theory and other formal techniques to problems of computer security. For example, the Samoa Project (with K. Bhargavan and C. Fournet) is developing formal tools for the security of XML Web Services.



Transforming RTL Design to Counter Automaton
Ales Smrcka | Brno University of Technology

2007-11-19, 14:00 - 14:00
Saarbrücken building E1 5, room Rotunda 6th floor

Abstract:

The languages for describing a hardware design at the RTL level (e.g., VHDL or Verilog) allow one to write a generic description. Such a description contains one or more parameters that give the design its generic nature (e.g., the number of items in a buffer or the width of the inputs and outputs of an arithmetic operation). A new approach to formal verification of generic hardware designs will be presented. The proposed approach is based on a translation of such designs to counter automata. The following topics of modelling and verification will be discussed in more detail: the translation of VHDL constructs to counter automata, specification of the environment of a modelled hardware component, specification of safety properties over a design, and some experimental results with ARMC and Blast.

Speaker's bio:

-



Can Concurrent Software Ever Be Quality Software?
Edward A. Lee | UC Berkeley

2007-11-09, 14:00 - 15:00
Kaiserslautern building G26, room rotunda bldg. 57

Abstract:



The most widely used concurrent software techniques, which are based on threads, monitors (or approximations to monitors), and semaphores, yield incomprehensible and untestable software. Bugs due to race conditions, timing unpredictability, and potential deadlocks can go undetected for a very long time. Unexpected interactions between even loosely coupled software components can destabilize systems. Yet increased parallelism in general-purpose computing (particularly multicore systems), multi-threaded languages such as Java and C#, increased networking in embedded computing, and a growing diversity of attached hardware requiring specialized device drivers mean that a much greater fraction of software is concurrent. Software designers are poorly equipped for this. They use threads because, syntactically, threads change almost nothing. They only later discover that, semantically, threads change everything. By then it is too late. Yet there is no shortage of theory; there is a mature community with sound, well-developed techniques that the mainstream largely ignores. How can we change that? In this talk, I will make a case that composition languages, which describe program structure only, can be coupled with concurrent models of computation and conventional imperative languages to form a powerful troika. Such heterogeneous combinations of languages do have a chance for acceptance and, in certain niche situations, have already achieved a measure of it.

Speaker's bio:

Edward A. Lee is the Robert S. Pepper Distinguished Professor and Chair of the Electrical Engineering and Computer Sciences (EECS) department at U.C. Berkeley. His research interests center on design, modeling, and simulation of embedded, real-time computational systems. He is a director of Chess, the Berkeley Center for Hybrid and Embedded Software Systems, and is the director of the Berkeley Ptolemy project. He is co-author of five books and numerous papers. He has led the development of several influential open-source software packages, including Ptolemy, Ptolemy II, HyVisual, and VisualSense. His bachelor's degree (B.S.) is from Yale University (1979), his master's (S.M.) from MIT (1981), and his Ph.D. from U.C. Berkeley (1986). From 1979 to 1982 he was a member of technical staff at Bell Telephone Laboratories in Holmdel, New Jersey, in the Advanced Data Communications Laboratory. He is a co-founder of BDTI, Inc., where he is currently a Senior Technical Advisor, and has consulted for a number of other companies. He is a Fellow of the IEEE, was an NSF Presidential Young Investigator, and won the 1997 Frederick Emmons Terman Award for Engineering Education.



Stable Internet Routing Without Global Coordination
Jennifer Rexford | Princeton University

2007-10-29, 14:00 - 14:00
Saarbrücken building E1 5, room 019

Abstract:



The Border Gateway Protocol (BGP) allows an autonomous system (AS) to apply diverse local policies for selecting routes and propagating reachability information to other domains. However, BGP permits ASes to have conflicting policies that can lead to routing instability. This talk proposes a set of guidelines for an AS to follow in setting its routing policies, without requiring coordination with other ASes. Our approach exploits the Internet's hierarchical structure and the commercial relationships between ASes to impose a partial order on the set of routes to each destination. The guidelines conform to conventional traffic-engineering practices of ISPs, and provide each AS with significant flexibility in selecting its local policies. Furthermore, the guidelines ensure route convergence even under changes in the topology and routing policies. Drawing on a formal model of BGP, we prove that following our proposed policy guidelines guarantees route convergence. We also describe how our methodology can be applied to new types of relationships between ASes, how to verify the hierarchical AS relationships, and how to realize our policy guidelines. Our approach has significant practical value since it preserves the ability of each AS to apply complex local policies without divulging its BGP configurations to others. The end of the talk briefly summarizes follow-up studies that have built on this work.
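The preference and export guidelines described here (widely known in the later literature as the Gao-Rexford conditions) can be sketched in a few lines. The Python below is an illustrative sketch under assumed data structures, not the talk's formal model; the route records, function names, and tie-breaking rule are hypothetical.

```python
# Sketch of hierarchical route selection: prefer routes learned from
# customers over peers over providers, breaking ties by AS-path length.
PREFERENCE = {"customer": 3, "peer": 2, "provider": 1}

def best_route(routes):
    """Pick the most preferred route for a destination."""
    return max(routes, key=lambda r: (PREFERENCE[r["learned_from"]],
                                      -len(r["as_path"])))

def exportable(route, neighbor_relation):
    """Export everything to customers; advertise only customer-learned
    routes to peers and providers (the 'valley-free' export rule)."""
    if neighbor_relation == "customer":
        return True
    return route["learned_from"] == "customer"

routes = [
    {"as_path": [7, 8], "learned_from": "provider"},     # shorter path...
    {"as_path": [5, 6, 8], "learned_from": "customer"},  # ...but customer wins
]
best = best_route(routes)
assert best["learned_from"] == "customer"
assert exportable(best, "peer")
```

Ranking customer routes first follows the economics (customers pay the AS), and restricting exports imposes the partial order on routes that makes convergence provable.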

Speaker's bio:

Jennifer Rexford joined the Network Systems Group of the Computer Science Department at Princeton University in February 2005 after eight and a half years at AT&T Research. Her research focuses on Internet routing, network measurement, and network management, with the larger goal of making data networks easier to design, understand, and manage. Jennifer is co-author of the book Web Protocols and Practice: HTTP/1.1, Networking Protocols, Caching, and Traffic Measurement (Addison-Wesley, May 2001) and co-editor of She's an Engineer? Princeton Alumnae Reflect (Princeton University, 1993). Jennifer serves as the chair of ACM SIGCOMM, and as a member of the CRA Board of Directors. She received her BSE degree in electrical engineering from Princeton University in 1991, and her MSE and PhD degrees in computer science and electrical engineering from the University of Michigan in 1993 and 1996, respectively. She was the winner of ACM's Grace Murray Hopper Award for outstanding young computer professional of the year for 2004.



Formal Models for Side-Channel Attacks
Boris Köpf | ETH Zuerich

2007-10-24, 10:30 - 11:30
Saarbrücken building E1 5, room 023

Abstract:

Side-channel attacks have become so effective that they pose a real threat to the security of cryptographic algorithms. This threat is not covered by traditional notions of cryptographic security and models for proving resistance against it are only now emerging. In this talk, I will present work on such a model. It is based on concrete and realistic assumptions about the attacker and it is tailored to synchronous hardware, where faithful system models are available. The model leads to meaningful metrics for assessing the resistance of a system to side-channel attacks. I will show how these metrics can be computed and be used for analyzing nontrivial hardware implementations for their vulnerability to timing attacks. I will conclude with a number of directions for further research.
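One way to make such a resistance metric concrete (an assumption on my part, in the spirit of information-theoretic side-channel analysis, not necessarily the talk's exact definition) is to measure how much uncertainty about a secret remains after the attacker sees an observable such as execution time. The helper below is a hypothetical, simplified sketch.

```python
import math
from collections import defaultdict

def remaining_entropy(keys, observation):
    """Shannon entropy (in bits) of a uniformly chosen key *after* the
    attacker learns observation(key): keys producing the same observable
    value remain indistinguishable, so they form one partition block."""
    blocks = defaultdict(int)
    for k in keys:
        blocks[observation(k)] += 1
    n = len(keys)
    # Expected entropy over blocks; each block is uniform internally.
    return sum((b / n) * math.log2(b) for b in blocks.values())

# Toy example: a 4-bit key and a timing channel that leaks the key's
# Hamming weight (a common simplified model for timing attacks).
keys = range(16)
leak = lambda k: bin(k).count("1")
print(remaining_entropy(keys, leak))          # < 4 bits: the channel leaks
print(remaining_entropy(keys, lambda k: 0))   # 4.0 bits: nothing leaks
```

The smaller the remaining entropy, the further the observations narrow down the key space, which gives a quantitative handle on vulnerability to timing attacks.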

Speaker's bio:

Boris Koepf is a Ph.D. candidate at the Swiss Federal Institute of Technology (ETH). His interests include information security, program analysis and information flow, programming languages, and formal methods.



Technology for Developing Regions
Eric Brewer | UC Berkeley

2007-09-28, 11:00 - 12:00
Kaiserslautern building G26, room bldg. 42, HS 110

Abstract:

Moore's Law and the wave of technologies it enabled have led to tremendous improvements in productivity and the quality of life in industrialized nations. Yet, technology has had almost no effect on the other five billion people on our planet. In this talk I argue that decreasing costs of computing and wireless networking make this the right time to spread the benefits of technology, and that the biggest missing piece is a lack of focus on the problems that matter. After covering some example applications that have shown very high impact, I present some of our own results, including the use of novel low-cost telemedicine to improve the vision of real people, with over 20,000 patients examined so far. I conclude with some discussion on the role of EECS researchers in this new area.

Speaker's bio:

Dr. Brewer focuses on all aspects of Internet-based systems, including technology, strategy, and government. As a researcher, he has led projects on scalable servers, search engines, network infrastructure, sensor networks, and security. His current focus is (high) technology for developing regions, with projects in India, Ghana, Rwanda, and Uganda, among others, covering health care, education, and connectivity.

In 1996, he co-founded Inktomi Corporation with a Berkeley grad student, based on their research prototype, and helped lead it onto the Nasdaq 100 before it was bought by Yahoo! in March 2003. In 2000, he founded the Federal Search Foundation, a 501(c)(3) organization, which created the official US government portal with President Clinton, www.FirstGov.gov (now www.usa.gov).

He was named a "Global Leader for Tomorrow" by the World Economic Forum, the "most influential person on the architecture of the Internet" by the Industry Standard, and one of 12 "e-mavericks" by Forbes, for which he appeared on the cover.



Structural Abstraction of Software Verification Conditions
Domagoj Babic | University of British Columbia

2007-09-27, 15:00 - 16:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:

Precise software analysis and verification require tracking the exact path along which a statement is executed (path-sensitivity), the different contexts from which a function is called (context-sensitivity), and the bit-accurate operations performed. Previously, verification with such precision has been considered too inefficient to scale to large software. In this talk, Domagoj presents a novel approach to solving such verification conditions, based on an automatic abstraction-checking-refinement framework that exploits natural abstraction boundaries present in software. Experimental results show that our approach scales to over 200,000 lines of real C code.

Speaker's bio:



Domagoj Babic is a Ph.D. candidate at the University of British Columbia. His research interests include software verification and analysis, as well as automated theorem proving and SAT solving. Domagoj is currently working on Calysto, a scalable and precise static checker, and Spear, a bit-vector decision procedure, both for software verification. He is planning to graduate in early spring 2008.

Domagoj holds an M.Sc. degree in computer science and a Dipl.Ing. degree in industrial electronics from the Faculty of Electrical Engineering and Computing at Zagreb University.

For more information, see http://www.domagoj.info/



The Vision and Reality of Ubiquitous Computing
Henning Schulzrinne | Columbia University

2007-09-19, 15:00 - 15:00
Kaiserslautern building G26, room bldg. 57, rotunda

Abstract:

Simultaneous videocast: Campus of Saarland University, Building E 1.4, room 024 (MPI building)



About ten years ago, the notion of ubiquitous computing first appeared, just as the first truly mobile devices became available. Ten years later, the notion of ubiquitous computing as integrating computing into the environment has not quite panned out, with the emphasis shifting to personal, mobile devices. In this talk, I will try to illustrate some of the user-focused challenges that derive from the goals of ubiquitous computing. We have also started to address some aspects of the problem with our work on service and session mobility, as well as attempts to offer core network services, such as email and web access, in partially disconnected environments.



Speaker's bio:

Prof. Henning Schulzrinne received his Ph.D. from the University of Massachusetts in Amherst, Massachusetts. He was a member of technical staff at AT&T Bell Laboratories, Murray Hill and an associate department head at GMD-Fokus (Berlin), before joining the Computer Science and Electrical Engineering departments at Columbia University, New York. He is currently chair of the Department of Computer Science.

Protocols co-developed by him, such as RTP, RTSP and SIP, are now Internet standards, used by almost all Internet telephony and multimedia applications. His research interests include Internet multimedia systems, ubiquitous computing, mobile systems, quality of service, and performance evaluation. He is a Fellow of the IEEE.





Loss and Delay Accountability for the Internet
Katerina Argyraki | EPFL, Switzerland

2007-09-12, 11:00 - 12:00
Saarbrücken building E1 5, room rotunda 6th floor

Abstract:



The Internet provides no information on the fate of transmitted packets, and end systems cannot determine who is responsible for dropping or delaying their traffic. As a result, they cannot verify that their ISPs are honoring their service level agreements, nor can they react to adverse network conditions appropriately. While current probing tools provide some assistance in this regard, they only give feedback on probes, not actual traffic. Moreover, service providers could, at any time, render their network opaque to such tools.

I will present AudIt, an explicit "accountability interface" for the Internet, through which ISPs can pro-actively supply feedback to traffic sources on loss and delay, at administrative-domain granularity. AudIt benefits not only end systems, but also ISPs, because---in contrast to probing tools---it allows them to control the amount and quality of information revealed about their internal structure and policy. I will show that AudIt is resistant to ISP lies in a business-sensible threat model and can be implemented with a modest NetFlow modification. Finally, I will discuss a Click-based prototype, which introduced less than 2% bandwidth overhead on real traces from a Tier-1 ISP.
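The idea of feedback at administrative-domain granularity can be sketched roughly as follows. The report format and function below are hypothetical illustrations of the concept, not AudIt's actual interface.

```python
# Illustrative sketch: each domain on the path reports, for a traffic
# aggregate, how many packets it received and forwarded and the delay
# it added, letting the source localize loss and delay to a domain.

def localize(reports):
    """reports: ordered list of (domain, packets_in, packets_out,
    delay_ms) entries. Charges each domain for packets it received but
    did not forward, and sums the per-domain delay contributions."""
    loss = {d: pin - pout for d, pin, pout, _ in reports}
    total_delay = sum(delay for _, _, _, delay in reports)
    return loss, total_delay

path_reports = [
    ("AS1", 1000, 1000, 5),
    ("AS2", 1000,  940, 30),   # AS2 dropped 60 packets, added 30 ms
    ("AS3",  940,  940, 8),
]
loss, delay = localize(path_reports)
assert loss["AS2"] == 60 and loss["AS1"] == 0
assert delay == 43
```

Because the reports are aggregates rather than per-packet traces, a domain controls how much of its internal structure it reveals, which matches the talk's point that the interface benefits ISPs as well as end systems.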

Speaker's bio:



Katerina Argyraki obtained her PhD in Electrical Engineering from Stanford University in 2006 and is currently a research scientist at EPFL, Switzerland. She did her PhD studies under the guidance of Prof. David Cheriton in the Distributed Systems Group, where she worked on the TRIAD project and developed AITF---a network-based solution to bandwidth flooding. She also wove in a few startup stints---a summer at Kealia (now part of Sun), another one at BlueArc, and, finally, a year at Arastra, before joining EPFL in 2007. Her research interests lie in the areas of network architecture and protocols with a focus on denial-of-service defenses and accountability solutions.





Lifetime Driven MAC Protocols for Ad Hoc Networking
Prof. Siva Ram Murthy | IIT Madras

2007-05-25, 14:00 - 14:00
Saarbrücken building E1 4, room 021

Abstract:

In the last few years, there has been great interest in Ad Hoc Wireless Networks, as they have tremendous military and commercial potential. An Ad Hoc Wireless Network is a wireless network comprising mobile nodes (which can also serve as routers) that use wireless transmission, with no infrastructure (no central administration such as a Base Station in a Cellular Wireless Network or an Access Point in a Wireless LAN). Ad Hoc Wireless Networks can be set up anywhere and anytime, as they eliminate the complexities of infrastructure setup. They find applications in several areas, including military applications (establishing communication among a group of soldiers for tactical operations, since setting up a fixed infrastructure in enemy territories or in inhospitable terrains may not be possible), collaborative and distributed computing, emergency operations, wireless mesh networks, wireless sensor networks, and hybrid (integrated Cellular and Ad Hoc) wireless networks. In this talk, I first present a brief overview of the major issues that influence the design and performance of Ad Hoc Wireless Networks. As the performance of any wireless network hinges on the Medium Access Control (MAC) protocol, even more so for Ad Hoc Wireless Networks, I then present, in detail, novel distributed homogeneous and heterogeneous battery-aware MAC protocols, which take advantage of the chemical properties and characteristics of batteries to provide fair node scheduling and increased network and node lifetime through uniform discharge of batteries.
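As a rough illustration of the uniform-discharge goal (not the protocols presented in the talk), a scheduler can always grant the channel to the node with the fullest battery, letting idle nodes recover some charge; batteries recover part of their effective charge when rested, which is the chemical property such protocols exploit. All numbers and the recovery model below are hypothetical.

```python
# Round-based sketch: each round, the node with the highest remaining
# charge transmits (cost tx_cost); idle nodes recover a little charge.
# Picking the fullest battery keeps discharge uniform across nodes.

def schedule(charges, rounds, tx_cost=2, recovery=1, capacity=100):
    charges = dict(charges)
    for _ in range(rounds):
        sender = max(charges, key=charges.get)   # fullest battery sends
        charges[sender] -= tx_cost
        for node in charges:                     # idle nodes recover
            if node != sender:
                charges[node] = min(capacity, charges[node] + recovery)
    return charges

final = schedule({"a": 100, "b": 90, "c": 80}, rounds=30)
# The initially uneven charges cluster together: uniform discharge.
assert max(final.values()) - min(final.values()) < 10
```

In this toy model the node charges converge and then drain together, which is the fairness and lifetime effect the battery-aware protocols aim for.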

Speaker's bio:

Siva Ram Murthy received his Ph.D. degree in Computer Science from the Indian Institute of Science, Bangalore and is currently a Professor in the Department of Computer Science and Engineering at IIT Madras, India.

He is the co-author of the textbooks Resource Management in Real-Time Systems and Networks (MIT Press, Cambridge, Massachusetts, USA), WDM Optical Networks: Concepts, Design, and Algorithms (Prentice Hall, Upper Saddle River, New Jersey, USA), and Ad Hoc Wireless Networks: Architectures and Protocols (Prentice Hall, Upper Saddle River, New Jersey, USA).

Dr. Murthy is a Fellow of the Indian National Academy of Engineering, an Associate Editor of IEEE Transactions on Computers, and a Subject Area Editor of the Journal of Parallel and Distributed Computing.



Abstraction for Liveness and Safety
Andrey Rybalchenko | Ecole Polytechnique Fédérale de Lausanne and Max Planck Institute for Computer Science

2007-04-26, 15:00 - 15:00
Saarbrücken building E1 4, room 019

Abstract:

We present new approaches to verification of liveness and safety properties of software. Proving liveness properties of programs is central to the process of ensuring that software systems can always react. Verification of liveness properties had been an open problem since the 1970s, due to the lack of modular termination arguments and adequate abstraction techniques. We highlight our experience in developing the theoretical foundations for the first software verification tool for termination that can handle large program fragments (of more than 20,000 lines of code), together with support for programming language features such as arbitrarily nested loops, pointers, function pointers, side effects, etc. We also describe our experience in applying the tool to device driver dispatch routines from the Windows operating system. In the second part of the talk, we will focus on abstraction techniques that are at the heart of state-of-the-art verification tools for safety. We address their limitations, which severely restrict their practical applicability. We propose a new approach for finding an abstraction of a program that overcomes the inherent limitations of current abstraction refinement schemes.

Speaker's bio:

Andrey Rybalchenko is a researcher at the Max Planck Institute for Computer Science in Saarbruecken and at Ecole Polytechnique Federale de Lausanne. He holds Dipl.-Inf. (2002) and Dr.-Ing. (summa cum laude, 2005) degrees from the University of Saarland, Germany. Andrey's research interests focus on automated methods and tools for formal software verification, ranging from the design of program analysis methods to the development of algorithms for symbolic computation and automated deduction. Andrey's doctoral research revolutionized verification of liveness properties for software systems by introducing "transition invariants". Jointly with Microsoft Research, Andrey developed the Terminator tool, the first tool to perform automatic verification of liveness properties for software. He is also developing the ARMC tool for automatically proving safety properties of complex infinite-state systems, which has been successfully applied to the verification of safety-critical parts of the European Train Control System. Andrey is a recipient of the Guenther Hotz medal (2002) from the University of Saarland and the Otto Hahn medal (2005) from the Max Planck Society.



Feedback-directed random test generation
Michael D. Ernst | MIT

2007-04-24, 16:00 - 16:00
Saarbrücken building E1 3 - Hörsaal Gebäude, room HS 003

Abstract:



We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (those that violate one or more contracts) point to potential errors that should be corrected.

Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation, in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves higher or equal block and predicate coverage than model checking (with and without abstraction) and undirected random generation. On 14 large, widely-used libraries (comprising 780KLOC), feedback-directed random test generation finds many previously-unknown errors, not found by either model checking or undirected random generation.
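The feedback loop described above can be sketched in miniature. This toy Python version is a hypothetical illustration (the actual tool builds method-call sequences for Java classes): it draws a random method, pulls arguments from previously built values, executes immediately, and classifies the result as illegal, contract-violating, redundant, or useful.

```python
import random

def feedback_directed_generation(methods, contracts, budget, seed=0):
    """Toy feedback-directed random generation. All names hypothetical."""
    rng = random.Random(seed)
    pool = [0, 1, -1]                       # seed values for arguments
    passing, failing = [], []
    for _ in range(budget):
        name, fn = rng.choice(methods)
        args = [rng.choice(pool) for _ in range(fn.__code__.co_argcount)]
        try:
            result = fn(*args)
        except Exception:
            continue                         # illegal input: discard
        if not all(check(result) for check in contracts):
            failing.append((name, args, result))  # potential error found
        elif result not in pool:
            pool.append(result)              # useful: feeds later inputs
            passing.append((name, args, result))
        # else: redundant, adds nothing new
    return passing, failing

# Toy "classes under test": an increment method and a buggy abs.
methods = [("inc", lambda x: x + 1),
           ("bad_abs", lambda x: x)]        # bug: negatives pass through
contracts = [lambda v: v >= 0]              # abs-like results must be >= 0
passing, failing = feedback_directed_generation(methods, contracts, budget=200)
```

Pruning redundant and illegal inputs is what steers the random search toward deeper object states, which is the source of the coverage gains reported above.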

Speaker's bio:

Michael D. Ernst is an Associate Professor in the MIT Department of Electrical Engineering & Computer Science, and is a member of MIT's Computer Science & Artificial Intelligence Lab (CSAIL).

His research aims to make software more reliable, more secure, and easier (and more fun!) to produce. His technical interests are primarily in software engineering, including static and dynamic program analysis, testing, security, type theory, programming language design, and verification.

Ernst was previously a lecturer in the Rice University computer science department and a researcher at Microsoft Research. He holds a Ph.D. in Computer Science and Engineering from the University of Washington.



Expanding and Exploiting the Expressive Power of Modules
Derek Dreyer | Toyota Technological Institute at Chicago

2007-04-19, 15:00 - 15:00
Saarbrücken building E1 4, room 019

Abstract:

Modularity is widely viewed as a central concept in the design of robust software. Programming languages vary widely, however, in the kinds of modularity mechanisms they support, leading to the mistaken impression that there are fundamental tradeoffs between different paradigms of modular programming. The high-level goal of my research is to design languages that overcome these tradeoffs and combine the benefits of existing mechanisms.

In this talk, I will describe my work on using the ML module system as a basis for developing next-generation module languages. ML provides an excellent starting point because of its powerful support for both data abstraction (implementor-side modularity) and generic programming (client-side modularity). Nevertheless, there are ways in which ML's module language is unnecessarily inflexible and in which the expressive power of ML modules has not been exploited to its full potential.

In the first part of the talk, I will compare the support for generic programming offered by ML modules with that offered by Haskell's "type classes". Modules emphasize explicit program configuration and namespace control, whereas type classes emphasize implicit program configuration and overloading. In fact, I show that it is possible to support both styles of programming within one language, by exploiting the expressive power of ML modules and encoding type classes directly in terms of existing ML module mechanisms.
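A loose analogy for this encoding (in Python rather than ML, so the types stay implicit) is explicit dictionary passing: a type-class "instance" becomes a record of operations handed to generic code as an ordinary argument, much as a module would be. All names below are illustrative, not the talk's actual encoding.

```python
# Two "Eq instances": records (stand-ins for modules) implementing
# equality for a particular type.
eq_int = {"eq": lambda a, b: a == b}

def eq_list(eq_elem):
    """An instance built from another instance, analogous to a functor."""
    return {"eq": lambda xs, ys: len(xs) == len(ys)
            and all(eq_elem["eq"](x, y) for x, y in zip(xs, ys))}

def member(eq, x, xs):
    """Generic function, explicitly configured by the eq 'module'
    instead of relying on compiler-inferred (type-class) dispatch."""
    return any(eq["eq"](x, y) for y in xs)

assert member(eq_int, 3, [1, 2, 3])
assert member(eq_list(eq_int), [1, 2], [[0], [1, 2]])
```

The trade-off the talk contrasts is visible even here: the explicit style gives precise control over which instance is used, while type classes let the compiler pick the dictionary implicitly.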

In the second part of the talk, I will discuss the problem of extending ML with one of the most frequently requested features---recursive modules. The lack of support for this feature is a key stumbling block in incorporating ML-style modules into object-oriented languages, in which recursive components are commonplace. The primary difficulty with recursive modules is something called the "double vision" problem, which concerns the interaction of recursion and data abstraction. I trace the problem back to a deficiency in the classical type-theoretic account of data abstraction (namely, existential types), and I show how a novel interpretation of data abstraction as a computational effect leads to an elegant solution.

Speaker's bio:

Derek Dreyer received a B.A. in Mathematics and Computer Science from New York University in 1996, and a Ph.D. in Computer Science from Carnegie Mellon University in 2005. Since January 2005, he has held the position of Research Assistant Professor at the Toyota Technological Institute at Chicago. His research interests include the design, implementation, and type-theoretic foundations of HOT (Higher-Order, Typed) programming languages, such as ML and Haskell, with particular focus on type systems for modular programming.



Enabling what-if explorations in distributed systems
Eno Thereska | Carnegie Mellon University

2007-04-16, 15:00 - 15:00
Saarbrücken building E1 4, room 019

Abstract:



With a large percentage of total system cost going to system administration tasks, self-management remains a difficult and important goal in systems. As a step towards the self-management vision, I will present a framework we have developed that enables systems to be self-predicting and answer "what-if" questions about their behavior with little or no administrator involvement. We have built a Resource Advisor inside two real systems: Microsoft's SQL Server database and the Ursa Minor storage system at Carnegie Mellon University. The Resource Advisor helps with upgrade and data placement decisions and provides what-if interfaces to external administrators (and internal tuning modules). The Resource Advisor is based on efficient system behavioral models that enable robust predictions in multi-tier systems.

Bio:

Eno Thereska is a PhD student at Carnegie Mellon University working with Prof. Greg Ganger. Eno has broad research interests in computer systems. Currently he is investigating ways to make the management of distributed systems easier. An approach he is currently pursuing puts sufficient instrumentation and modeling within the system, enabling it to answer several important what-if questions without outside intervention. He is interested in applying methods from queuing analysis (for components build from scratch) and machine learning (for legacy components) to this problem. As a testbed he is using Ursa Minor, a cluster-based storage system being deployed at Carnegie Mellon for researching system management issues. Concrete what-if questions in this system are about the effect of resource upgrades, service migration and data distribution. Eno received the Masters of Science (MS) degree in Electrical and Computer Engineering in 2003 at Carnegie Mellon University and the Bachelor of Science (BS) degree in Electrical and Computer Engineering and Computer Science in 2002 also at CMU. ------------------------------




Scalable Byzantine Fault Tolerance
Rodrigo Rodrigues | INESC-ID, Lisbon

2007-04-04, 15:00 - 15:00
Saarbrücken building E1 5, room 019

Abstract:

The research presented in this talk attempts to extend Byzantine fault tolerance protocols to provide better scalability. I will cover two axes of scalability in such systems.

First, I will focus on systems with a large number of nodes, where existing solutions are not well-suited, since they assume a static set of replicas, or provide limited forms of reconfiguration. In the context of a storage system we implemented, I will present a membership service that is part of the overall design, but can be reused by any large-scale Byzantine-fault-tolerant system. The membership service provides applications with a globally consistent view of the set of currently available servers. Its operation is mostly automatic, to avoid human configuration errors; it can be implemented by a subset of the storage servers; it tolerates arbitrary failure of the servers that implement it; and it can itself be reconfigured.

The second aspect of scalability of Byzantine-fault-tolerant systems that this talk discusses is scalability with the size of the replica group (and, consequently, with the number of faults the system tolerates). I will present a new replication protocol called HQ, which combines quorum-based techniques, which scale better in the absence of contention, with more traditional state-machine replication protocols with quadratic inter-replica communication, such as Castro and Liskov's BFT, which are useful for resolving concurrency issues.
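For intuition about the replica-group arithmetic behind quorum-based Byzantine protocols (a standard fact about this protocol family, not a detail specific to HQ): with n = 3f + 1 replicas and quorums of size 2f + 1, any two quorums overlap in at least f + 1 replicas, hence in at least one correct replica even when f are Byzantine. A small Python check:

```python
def min_quorum_intersection(n: int, q: int) -> int:
    """Smallest possible overlap of two quorums of size q among n replicas."""
    return max(0, 2 * q - n)

# For f Byzantine faults, BFT-style protocols use n = 3f + 1 replicas
# and quorums of size 2f + 1.
for f in range(1, 5):
    n, q = 3 * f + 1, 2 * f + 1
    # Any two quorums share at least f + 1 replicas, so at least one
    # correct replica is in every quorum intersection.
    assert min_quorum_intersection(n, q) == f + 1
```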

Speaker's bio:

-



Consolidation of Software Systems - The Leonardo Approach
Erik Sandewall | Department of Computer and Information Science, Linköping University

2006-11-14, 15:00 - 15:00
Saarbrücken building E1 4, room 024

Abstract:

In the Leonardo project we explore an alternative way of organizing the central software of computers, whereby the redundant duplication of concepts and facilities should be greatly reduced. The kernel of the new system inherits traits from operating system shells, programming language systems, discrete simulation systems, knowledge-based agent systems, and others. It integrates a number of facilities that are usually considered peripheral, such as an ontology structure and a version management facility. Facilities for communication between multiple such agent-like systems, for the purpose of task sharing and information exchange, are an essential ingredient in the design.

In the talk I will briefly describe the present, experimental implementation and then discuss the project from the following points of view:
- short-term and long-term opportunities for major reform of computer software architecture
- what types of software systems can with advantage be assimilated into a comprehensive architecture, with particular emphasis on human-robot dialog systems in multi-robot environments
- a perspective on textual data languages, such as XML and OWL, as compared with the representation developed in Leonardo
- new aspects of logics of actions and change that are obtained when working with the actual system implementation

Speaker's bio:

-



Scalable Network Management Using Ephemeral State
Ken Calvert | Laboratory for Advanced Networking, University of Kentucky

2006-11-02, 16:00 - 16:00
Saarbrücken building E1 4, room Rotunda 6th floor

Abstract:

It is widely acknowledged that today's network management tools are inadequate. Although the existing management protocol (SNMP) has been and remains indispensable, it offers neither the scalability nor the functionality needed to manage large systems. Active/programmable networks and mobile agent systems have been proposed as alternative network management solutions that offer more functionality and potentially better scalability. Unfortunately, the flexibility of these (heavyweight) approaches comes with its own set of problems, which prevent them from being widely adopted.

This talk will describe the use of ephemeral state processing (ESP) to efficiently monitor and collect information from large networks. ESP is a lightweight router-based programmable building block that can solve a range of network management problems while avoiding the problems that plague more complex approaches. Moreover, the simplicity of ESP allows it to be made available as a general-purpose service for all packets in the network. We demonstrate the utility of the service by showing how it can be used to solve some common network management problems.

(Joint work with J. Griffioen)
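The core mechanism can be pictured as a tiny per-router store whose entries vanish after a fixed, short lifetime, with packets carrying small instructions (such as a counter increment) that operate on it. The following Python sketch is a toy model of that idea, not the actual ESP instruction set; the class name, methods, and 10-second lifetime are illustrative:

```python
import time

class EphemeralStore:
    """Toy model of an ESP-style store: entries vanish after `lifetime` seconds."""
    def __init__(self, lifetime: float = 10.0):
        self.lifetime = lifetime
        self._data = {}  # key -> (value, expiry time)

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self.lifetime)

    def get(self, key, default=0):
        entry = self._data.get(key)
        if entry is None or entry[1] < time.monotonic():
            self._data.pop(key, None)   # entry expired: forget it
            return default
        return entry[0]

    def count(self, key):
        """A count-style instruction: bump a counter as packets pass through."""
        self.put(key, self.get(key) + 1)

# e.g. each packet carrying a count instruction bumps a per-flow counter
store = EphemeralStore(lifetime=10.0)
for _ in range(3):
    store.count("flow-42")
assert store.get("flow-42") == 3
```

Because state is bounded and self-cleaning, a router can safely expose such a store to all packets, which is what makes the service lightweight enough to deploy network-wide.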

Speaker's bio:

-



Using formally defined design patterns to improve system developments
Jean-Raymond Abrial | Department of Computer Science, ETH Zürich

2006-10-16, 15:00 - 15:00
Saarbrücken building E1 4, room 024

Abstract:

The concept of "design pattern" is well known in Object Oriented Technology. The main idea is to have some sort of reproducible engineering micro-design that the software designer can use in order to develop new pieces of software. In this presentation, I try to borrow such OO ideas and incorporate them within the realm of formal methods.

First, I will define (and prove) two formal patterns, where the second happens to be a refinement of the first. As a matter of fact, one very often encounters such patterns in the development of reactive systems, where several chains of reactions can take place between an environment and a software controller, and vice versa.

Second, the patterns are used many times in the development of a complete reactive system in which several pieces of equipment interact with a software controller. The result is a systematic construction made of six refinements. The entire system is proved completely automatically.

The relationship between the formal design patterns and the formal development of the problem at hand will be shown to correspond to a certain form of refinement. A demo will be shown.

Speaker's bio:

-



Electronic Voting: Risks and Research
Dan Wallach | Department of Computer Science, Rice University, Houston, Texas

2006-10-13, 14:00 - 14:00
Saarbrücken building E1 4, room 019

Abstract:

Hanging chads, among other issues with traditional voting systems, have sparked great interest in managing the election process through the use of newer electronic voting systems. While computer scientists, for the most part, have been warning of the perils of such action, vendors have forged ahead with their products, claiming increased security, reliability, and accuracy. Many municipalities have adopted electronic systems, and the number of deployed systems is rising. To the limited extent that independent security analyses have been published, the results have raised serious reservations about the ability of these systems to resist attacks. This talk will describe problems we and other researchers have discovered and will consider the limitations of the certification processes that should have guaranteed some quality control. These issues, in turn, give rise to a variety of interesting research problems that span computer science, human factors, and public policy. In this talk, we will consider how both established and open research in software engineering, distributed systems, and cryptography can and should impact the next generation of voting systems.

Speaker's bio:

Dan Wallach is an associate professor in the Department of Computer Science at Rice University in Houston, Texas and is the associate director of NSF's ACCURATE (A Center for Correct, Usable, Reliable, Auditable and Transparent Elections). A collaborative project involving six institutions, ACCURATE is investigating software architectures, tamper-resistant hardware, cryptographic protocols and verification systems as applied to electronic voting systems. Wallach earned his bachelor's at the University of California at Berkeley and his PhD at Princeton University. His research involves computer security and the issues of building secure and robust software systems for the Internet. Wallach has testified about voting security issues before government bodies in the U.S., Mexico, and the European Union.



Adaptive Embedded Systems
Devika Subramanian | Rice University

2006-07-03, 14:00 - 14:00
Saarbrücken building E1 5, room 024

Abstract:

While embedded systems are ubiquitous, adaptive ones are not. My research goal is to push the science and engineering of adaptive embedded systems by exploring the addition of adaptivity to a diverse variety of complex systems. I will present my work on four current projects: (1) tracking human learning on a complex visual-motor task, (2) predicting conflict from events data extracted from wire stories, (3) customizing application-specific compiler optimization sequences, and (4) learning regulatory networks in normal and cancer cells.

Speaker's bio:

Devika Subramanian obtained her undergraduate degree in electrical engineering and computer science from the Indian Institute of Technology, and her PhD in computer science from Stanford University in 1989. She is presently a Professor of Computer Science at Rice University, where she has been on the faculty since 1995. Her research interests are in the design and analysis of embedded adaptive systems and their applications in science and engineering (http://www.cs.rice.edu/~devika). Subramanian served as co-Program Chair for AAAI in 1999, and was on the IJCAI Advisory Board in 2001. She has given many invited lectures on her work. She has won teaching awards at Stanford, Cornell and at Rice. Her research has been funded by the National Science Foundation, National Institutes of Health, Office of Naval Research, Defense Advanced Research Projects Agency, and the Texas Advanced Technology Program.



Byzantine Fault-Tolerance and Beyond
Jean-Philippe Martin | University of Texas at Austin

2006-04-24, 15:00 - 15:00
Saarbrücken building E1 4, room 024

Abstract:

Computer systems should be trustworthy in the sense that they should reliably answer requests from legitimate users and protect confidential information from unauthorized users. Building such systems is challenging, even more so in the increasingly common case where control is split among multiple administrative domains.

Byzantine fault tolerance techniques can elegantly provide reliability without overly increasing the complexity of the system and have recently earned the attention of the systems community. In the first part of this talk I discuss some of the contributions I have made toward practical Byzantine fault tolerance---in particular, how to reduce the cost of replication and how to reconcile replication with confidentiality. In the second part of the talk I argue that Byzantine fault tolerance alone is not sufficient to deal with cooperative services under multiple administrative domains, where nodes may deviate from their specification not just because they are broken or compromised, but also because they are selfish. To address this challenge, I propose BAR, a new failure model that combines concepts from Byzantine fault tolerance and game theory. I will describe BAR, present an architecture for building BAR services, and briefly discuss BAR-B, a BAR-tolerant cooperative backup system.

Speaker's bio:

Jean-Philippe Martin is a Ph.D. candidate at the Department of Computer Sciences at The University of Texas at Austin. He holds an M.S. and received his B.S. in Computer Science from the Swiss Federal Institute of Technology (EPFL). His main research interests are trustworthy systems, Byzantine fault tolerance, and cooperative systems. His papers on cooperative services (SOSP '05) and fast Byzantine consensus (DSN '05) were recognized as some of the best papers at these conferences and were selected for journal publication.



Modular Static Analysis with Sets and Relations
Viktor Kuncak | MIT CSAIL

2006-04-06, 15:00 - 15:00
Saarbrücken building E1 4, room 022

Abstract:

We present a new approach for statically analyzing data structure consistency properties. Our approach is based on specifying interfaces of data structures using abstract sets and relations. This enables our system to independently verify that

1) each data structure satisfies internal consistency properties and each data structure operation conforms to its interface;

2) the application uses each data structure interface correctly, and maintains the desired global consistency properties that cut across multiple data structures.

Our system verifies these properties by combining static analyses, constraint solving algorithms, and theorem provers, promising an unprecedented combination of precision and scalability. The combination of different techniques is possible because all system components use a common specification language based on sets and relations.
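To make the set-based interface idea concrete, here is a hypothetical run-time rendering in Python of what such a system checks statically: a linked-list insert verified against its abstract-set contract. The class names and abstraction function are invented for illustration; the actual system proves the contract for all executions rather than checking one.

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

class LinkedList:
    """Concrete data structure whose interface is specified with an abstract set."""
    def __init__(self):
        self.head = None

    def abstract_content(self) -> frozenset:
        # Abstraction function: the set of values stored in the list.
        seen, node = set(), self.head
        while node is not None:
            seen.add(node.value)
            node = node.next
        return frozenset(seen)

    def insert(self, x):
        # Interface contract (checked here at run time; verified statically
        # by the approach in the talk): content' = content union {x}
        before = self.abstract_content()
        self.head = Node(x, self.head)
        assert self.abstract_content() == before | {x}

lst = LinkedList()
for v in [1, 2, 2, 3]:
    lst.insert(v)
assert lst.abstract_content() == {1, 2, 3}
```

Clients reason only about the set `content`, never about nodes and pointers, which is what lets the application-level and implementation-level verification proceed independently.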

In the context of our system, we developed new algorithms for computing loop invariants, new techniques for reasoning about sets and their sizes, and new approaches for extending the applicability of existing reasoning techniques. We successfully used our system to verify data structure consistency properties of examples based on computer games, web servers, and numerical simulations. We have verified implementations and uses of data structures such as linked lists with back pointers and iterators, trees with parent pointers, two-level skip lists, array-based data structures, as well as combinations of these data structures. This talk presents our experience in developing the system and using the system to build verified software.

Speaker's bio:

Viktor Kuncak is a Ph.D. candidate in the MIT Computer Science and Artificial Intelligence Lab. His interests include program analysis and verification, software engineering, programming languages and compilers, and formal methods.



Interfaces and Contracts
Matthias Felleisen | Northeastern University, Boston

2006-03-30, 14:00 - 14:00
Kaiserslautern building G26, room 024

Abstract:

Large software systems consist of dozens and hundreds of interlocking components. Software engineers adapt components from other suppliers, create their own, and glue all of these together in one large product. In this world, it becomes critical to pinpoint flaws quickly when software fails. Then the component consumer can request a fix from the producer of the faulty component or replace the component with an equivalent component from a different producer.

To get to this world, programmers must learn to use interfaces and to enrich them with contractual specifications. Programming language researchers must explore interface-oriented programming in its most radical form and must evaluate its pragmatic consequences. In this talk, I report on our first steps in this direction, presenting empirical findings, research results, research plans, and wild speculations.

Speaker's bio:

Matthias Felleisen is currently a Trustee Professor at Northeastern University. He joined its College of Computer and Information Science in 2001, after a 14-year career at Rice University in Houston with sabbaticals at Carnegie Mellon University in Pittsburgh and École Normale Supérieure in Paris.

He received his PhD from Daniel P. Friedman at Indiana University in 1984.

Felleisen's research career consists of two distinct 10-year periods. For the first ten years, he focused on the semantics of programming languages and its applications.

His work on operational semantics has become one of the standard working methods in programming languages. For the second ten years, Felleisen and his research group (PLT) developed a novel method for teaching introductory programming, including a new approach to program design and a programming environment for novice programmers (DrScheme). This environment has become a popular alternative to the conventional set of teaching tools and is now used at a couple of hundred colleges and high schools around the world. For Felleisen and his team, the construction of a large, realistic software application has posed many interesting and challenging research problems in programming languages, component programming, software contracts, and software engineering. Over the past 20 years, Felleisen has published several dozen research papers in scientific journals, conferences, and magazines. In addition, he has co-authored five books, including How to Design Programs and The Little LISPer (now called The Little Schemer), which, at the age of 30, is one of the oldest continuously published books in the field.



Reverse engineering the Internet: inter-domain topology and ab/use
Anja Feldmann | TU Munich

2006-03-27, 14:00 - 14:00
Kaiserslautern building G26, room 024

Abstract:

As the Internet is a human-designed system, one might expect full knowledge of how it operates and how it is used. Yet this is not the case: it is a prime example of a complex distributed software system that is managed in a decentralized manner. With regard to how the Internet is managed, I will present how to extract an inter-domain topology model that captures the route diversity of the Internet. With regard to how the Internet is used and abused by its users, I will present extensions to a network intrusion detection system that enable us to perform dynamic application-layer protocol analysis, and show how a time machine can be used for security forensics and network troubleshooting.

Speaker's bio:

Anja Feldmann is a full professor for network architectures in the Computer Science department at the Technische Universitaet Muenchen, Germany. From 2000 to 2002 she was a professor for computer networking at Saarland University, Germany. Before that (1995 to 1999) she was a member of the Networking and Distributed Systems Center at AT&T Labs -- Research in Florham Park, New Jersey. Her current research interests include Internet measurement, traffic engineering and traffic characterization, network performance debugging, and intrusion detection. She received a M.S. degree in Computer Science from the University of Paderborn, Paderborn, Germany, in 1990 and M.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University in Pittsburgh, USA, in 1991 and 1995, respectively.



Biological Systems as Reactive Systems
Luca Cardelli | Microsoft Research

2006-03-23, 14:00 - 14:00
Kaiserslautern building G26, room 24

Abstract:

Systems Biology is a new discipline aiming to understand the behavior of biological systems as it results from the (non-trivial, "emergent") interaction of biological components. We discuss some biological networks that are characterized by simple components, but by complex interactions. The components are separately described in stochastic pi-calculus, which is a "programming language" that should scale up to description of large systems. The components are then wired together, and their interactions are studied by stochastic simulation. Subtle and unexpected behavior emerges even from simple circuits, and yet stable behavior emerges too, giving some hints about what may be critical and what may be irrelevant in the organization of biological networks.
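Stochastic simulation of such models typically follows a Gillespie-style algorithm: repeatedly sample the time to the next reaction from the total propensity, then pick which reaction fires in proportion to its rate. As a rough illustration (a toy birth-death process, not the pi-calculus encoding itself; all rates and names are invented):

```python
import random

def gillespie(x0, birth_rate, death_rate, t_end, seed=1):
    """Minimal Gillespie-style stochastic simulation of a birth-death process,
    the kind of algorithm underlying stochastic simulators."""
    rng = random.Random(seed)
    t, x, trace = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        rates = [birth_rate, death_rate * x]   # propensity of each reaction
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)            # exponential waiting time
        # choose the reaction in proportion to its propensity
        x += 1 if rng.random() < rates[0] / total else -1
        trace.append((t, x))
    return trace

trace = gillespie(x0=10, birth_rate=1.0, death_rate=0.1, t_end=5.0)
assert trace[0] == (0.0, 10)
assert all(x >= 0 for _, x in trace)
```

Running many such trajectories and comparing their variability is what reveals which interactions are critical and which are irrelevant to the network's stable behavior.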

Speaker's bio:

Luca implemented the first compiler for ML (the most popular typed functional language) and one of the earliest direct-manipulation user-interface editors. He was a member of the Modula-3 design committee and has designed a few experimental languages, of which the latest are Obliq, a distributed higher-order scripting language, and Polyphonic C#, an object-oriented language with modern concurrency abstractions. His more protracted research activity has been in establishing the semantic and type-theoretic foundations of object-oriented languages.

Luca was born in Montecatini Terme, Italy, studied at the University of Pisa until 1979, and has a Ph.D. in computer science from the University of Edinburgh (1982). He worked at Bell Labs, Murray Hill, from 1982 to 1985 and at Digital Equipment Corporation, Systems Research Center in Palo Alto, from 1985 to 1997, before assuming his current position at Microsoft Research in Cambridge.



The Spec# Programming System
Rustan Leino | Microsoft Research

2006-03-20, 14:00 - 14:00
Kaiserslautern building G26, room 024

Abstract:

Spec# is a programming system that aims to provide programmers with a higher degree of rigor than in common languages today. The Spec# language extends the object-oriented .NET language C#, adding features like non-null types, pre- and postconditions, and object invariants. In addition to static type checking and compiler-emitted run-time checks for specifications, Spec# has a static program verifier. The program verifier translates Spec# programs into verification conditions, which are then analyzed by an automatic theorem prover. In this talk, I will give an overview of Spec#, including a demo. I will then discuss some aspects of its design in more detail.
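Spec# itself extends C#; purely as an illustration of the pre-/postcondition idea with compiler-emitted run-time checks, the following Python sketch mimics the flavor (the `contract` decorator and `pick_max` function are invented for this example and are not Spec# syntax):

```python
def contract(pre=None, post=None):
    """Attach a precondition and postcondition to a function, checked at
    run time, in the spirit of Spec#'s compiler-emitted checks."""
    def wrap(fn):
        def checked(*args):
            if pre is not None:
                assert pre(*args), "precondition violated"
            result = fn(*args)
            if post is not None:
                assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,          # requires: non-empty input
          post=lambda r, xs: r in xs)          # ensures: result is an element
def pick_max(xs):
    return max(xs)

assert pick_max([3, 1, 4]) == 4
```

Spec#'s static verifier goes further than such run-time checks: it translates the annotated program into verification conditions and discharges them with an automatic theorem prover, so violations are found before the program ever runs.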

Speaker's bio:

Dr. Rustan Leino is a researcher at Microsoft Research, where his research centers around programming tools.  He is currently working on the design and implementation of the Spec# programming language and its static program verifier.  Before joining Microsoft Research, Leino worked as a researcher at DEC/Compaq SRC, where among other things he led the Extended Static Checking for Java (ESC/Java) project, a program checker built on the technology of program verification.  His PhD thesis from Caltech (1995) addressed an important specification problem in ESC/Modula-3.  Before going to graduate school, Leino worked as a software developer and technical lead at Microsoft.



Type Systems for Multithreaded Software
Cormac Flanagan | University of California, Santa Cruz

2006-02-23, 16:00 - 16:00
Saarbrücken building 46.1 - MPII, room 024

Abstract:

Developing correct multithreaded software is very challenging, due to the potential for unintended interference between threads. We present type systems for verifying two key non-interference properties in multithreaded software: race-freedom and atomicity. Verifying atomicity is particularly valuable since atomic procedures can be understood according to their sequential semantics, which significantly simplifies subsequent (formal and informal) correctness arguments. We will describe our experience applying these type systems and corresponding type inference algorithms to standard multithreaded benchmarks and other applications, and illustrate some defects revealed by this approach.
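To illustrate the property that an atomicity checker verifies, here is a hedged Python sketch (class and method names are invented): a read-modify-write sequence made atomic with a lock, so that the method can be understood according to its sequential semantics.

```python
import threading

class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self._lock = threading.Lock()

    def deposit(self, amount):
        # Without the lock, the read and the write below could interleave
        # with another thread's deposit and lose an update. The lock makes
        # the whole method atomic, so sequential reasoning applies.
        with self._lock:
            current = self.balance
            self.balance = current + amount

acct = Account()
threads = [threading.Thread(target=lambda: [acct.deposit(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert acct.balance == 4000
```

A type system for atomicity certifies, at compile time, that every execution of such a method is equivalent to one in which it runs without interruption, rather than relying on testing to expose rare interleavings.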

Speaker's bio:

-



Interacting with Ubiquitous Computing Systems
Albrecht Schmidt | LMU Muenchen

2006-02-14, 16:00 - 16:00
Saarbrücken building E1 5, room 024 - Harald Ganzinger Hoersaal

Abstract:

Computing and communication devices are pervasively embedded into our everyday environments. Interaction in the context of the real world increasingly means interacting with complex mobile and embedded information systems. In many cases such interaction is multimodal and distributed between public and personal mobile devices. Advances in underlying network, processing, perception, and actuation technologies, as well as new production techniques such as 3D printing, allow unprecedented options for creating novel user interfaces. However, as the constraints that applied to mechanical and electrical user interfaces no longer hold, there is a great risk of creating user interfaces whose conceptual model is no longer understandable. To make pervasive computing usable, the great challenge is establishing appropriate interaction paradigms and metaphors.

Context-Aware Interaction, Implicit Interaction and Tangible User Interfaces are novel approaches which take into account that the interaction with information happens in the real world and is conceptually embedded into foreground tasks carried out by the user. To explore opportunities and challenges several prototypical systems were designed, implemented, and evaluated. Using case studies the talk presents innovative mobile phone applications that make use of contextual information, novel interactive devices, and conventional objects and computers that are enriched by pervasive technologies. This will outline the interplay between pervasive technologies, mobile systems and user experience.

Concentrating on what information is created and what information is consumed by the user while performing a task in the real world is the basic idea of Embedded Interaction. The focus is not on a single technology or a specific device. The aim is to seek optimal support for a task considering all technologies available in a certain context. This requires an understanding of different parameters of novel input and output technologies. The talk concludes with an outlook on research challenges that arise from the concept of Embedded Interaction related to current developments in the field of pervasive computing.

Speaker's bio:

Dr. Albrecht Schmidt is head of the Embedded Interaction research group in the computer science department at the University of Munich (Ludwig-Maximilians-Universität München), Germany. His general research interests are ubiquitous computing and context-awareness. In particular, he is interested in novel user interfaces and new interaction methods. Albrecht received a PhD in computer science from Lancaster University, UK, an MSc in Computer Science (Diplom) from the University of Ulm, Germany, and an MSc in Computing from Manchester Metropolitan University, UK. From 1998 to 2001 he worked as a research assistant at TecO at the University of Karlsruhe.



Programming Ad-hoc Networks of Mobile Devices
Ulrich Kremer | Rutgers University

2005-11-14, 11:00 - 11:00
Saarbrücken building E1 5, room 024

Abstract:

Ad-hoc networks of mobile devices such as smart phones and PDAs represent a new and exciting distributed system architecture. Building distributed applications on such an architecture poses new design challenges in programming models, languages, compilers, and runtime systems. This talk will introduce SpatialViews, a high-level language designed for programming mobile devices connected through a wireless ad-hoc network. SpatialViews allows specification of virtual networks with nodes providing desired services and residing in interesting spaces. These nodes are discovered dynamically with user-specified time constraints and quality of result (QoR). The programming model supports "best-effort" semantics, i.e., different executions of the same program may result in "correct" answers of different quality. It is the responsibility of the compiler and runtime system to produce a high-quality answer for the particular network and resource conditions encountered during program execution.

Example applications will be used to illustrate the different features of the SpatialViews language, and to demonstrate the expressiveness of the language and the efficiency of the compiler-generated code. Sample applications include sensor network applications that collect and aggregate sensor data within the network, applications that use dynamic service installation and computation offloading, and augmented-reality gaming. The efficiency of the compiler-generated code is verified through simulation and physical measurements. The reported results show that SpatialViews is an expressive and effective language for ad-hoc networks. In addition, compiler optimizations can significantly improve response times and energy consumption. More information about the language, compiler, and runtime system, including a distribution of our prototype system, can be found at http://www.cs.rutgers.edu/spatialviews .
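The best-effort semantics can be pictured as iteration over dynamically discovered nodes under a time budget, returning an answer whose quality depends on how many nodes were reached. The sketch below is purely illustrative Python, not SpatialViews syntax; the function name, node list, and budget are invented:

```python
import time

def best_effort_average(nodes, read_sensor, budget_s=0.5):
    """Visit discovered nodes until the time budget runs out and average
    however many readings were obtained (best-effort semantics: different
    runs may yield "correct" answers of different quality)."""
    deadline = time.monotonic() + budget_s
    readings = []
    for node in nodes:
        if time.monotonic() >= deadline:
            break                       # budget exhausted: stop early
        readings.append(read_sensor(node))
    return sum(readings) / len(readings) if readings else None

# hypothetical usage: read_sensor would normally be a (possibly slow) remote call
assert best_effort_average([1, 2, 3], read_sensor=lambda n: n * 10.0) == 20.0
```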

Speaker's bio:

-



Coccinelle: A Language-Based Approach to Managing the Collateral Evolution of Linux Device Drivers
Gilles Muller | Ecole des Mines de Nantes

2005-09-14, 14:30 - 14:30
Saarbrücken building E1 5, room 24

Abstract:

The Linux operating system is undergoing continual evolution. Evolution in the kernel and generic driver modules often triggers the need for corresponding evolutions in specific device drivers. Such collateral evolutions are tedious, because of the large number of device drivers, and error-prone, because of the complexity of the code modifications involved. We propose an automatic tool, Coccinelle, to aid in the driver evolution process. In this talk, we examine some recent evolutions in Linux and the collateral evolutions they trigger, and assess the corresponding requirements on Coccinelle.

Speaker's bio:

-



DREAM: a Component Framework for the Construction of Resource-Aware, Dynamically Configurable Communication Middleware
Vivien Quema | Institut National Polytechnique de Grenoble, INRIA Rhône-Alpes, France

2005-09-08, 11:00 - 12:30
Saarbrücken building 46.1 - MPII, room 024

Abstract:

In this talk, we present the work we are conducting at INRIA Rhône-Alpes on the design of component-based framework for the construction of autonomous systems.

Modern distributed computing systems are becoming increasingly complex. A major trend currently is to build autonomous systems, i.e. systems that reconfigure themselves upon occurrence of events such as software and hardware faults, performance degradation, etc. Building autonomous systems requires both a software technology allowing the development of administrable systems and the ability to build control loops in charge of regulating and optimizing the behavior of the managed system.

In this talk, we will mainly focus on the first requirement, i.e. providing a software technology for the development of administrable systems. We argue that better configurability can be reached through the use of component-based software frameworks. In particular, we present DREAM, a software framework for the construction of message-oriented middleware (MOMs). Several MOMs have been developed in the past ten years. The research work has primarily focused on the support of various non-functional properties like message ordering, reliability, security, scalability, etc. Less emphasis has been placed on MOM configurability. From the functional point of view, existing MOMs implement a fixed programming interface (API) that provides a fixed subset of asynchronous communication models (publish/subscribe, event/reaction, message queues, etc.). From the non-functional point of view, existing MOMs often provide the same non-functional properties for all message exchanges, which reduces their performance.

To overcome these limitations, we have developed DREAM (Dynamic REflective Asynchronous Middleware), a component framework for the construction of dynamically reconfigurable communication systems. The idea is to build a middleware as an assembly of interacting components, which can be statically or dynamically configured to meet different design requirements or environment constraints. DREAM provides a component library and a set of tools to build, configure, and deploy middleware implementing various communication paradigms. DREAM defines abstractions and provides tools for controlling the use of resources (i.e. messages and activities) within the middleware. Moreover, it builds upon the Fractal component model, which provides support for hierarchical and dynamic composition.
DREAM has been successfully used for building various forms of communication middleware: publish-subscribe (JMS), total-order group communication protocols, probabilistic broadcast, asynchronous RPC, etc.

Speaker's bio:

-



Implementing Declarative Overlays
Timothy Roscoe | Intel Research Berkeley

2005-08-24, 15:00 - 15:00
Saarbrücken building E1 4, room 24

Abstract:

Overlay networks are used today in a variety of distributed systems ranging from file-sharing and storage systems to communication infrastructures. Overlays of various kinds have recently received considerable attention in the networked systems research community, partly due to the availability of the PlanetLab planetary-scale application platform. However, a broader historical perspective is that overlay functionality has implicitly long been a significant component of wide-area distributed systems.

Despite this, designing, building, and adapting these overlays to an intended application and target environment is a difficult and time-consuming process.

To ease the development and deployment of such overlay networks, my research group at Intel Berkeley, in conjunction with the University of California at Berkeley, is building P2, a system which uses a declarative logic language to express overlay networks in a highly compact and reusable form. P2 can express a Narada-style mesh network in 13 rules, and the Chord structured overlay in only 35 rules. P2 directly parses and executes such specifications using a dataflow architecture to construct and maintain the overlay networks. I'll describe the P2 approach, how our implementation works, and give some experimental results showing that the performance and robustness of P2 overlays are acceptable.
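To illustrate the flavor of declarative overlay specification, the following is a toy Datalog-style fixpoint computation in Python; P2 itself uses its own logic language and dataflow runtime, so this is an analogy, not P2 code. The rules derive network reachability from a link relation, much as P2 rules derive routing state.

```python
# Illustrative only: a tiny Datalog-style fixpoint in the spirit of
# declarative networking (not P2's actual language or engine).
links = {("a", "b"), ("b", "c"), ("c", "d")}

# Rule 1: reachable(X, Y) :- link(X, Y).
# Rule 2: reachable(X, Z) :- link(X, Y), reachable(Y, Z).
reachable = set(links)
changed = True
while changed:
    changed = False
    for (x, y) in links:
        for (y2, z) in list(reachable):
            if y == y2 and (x, z) not in reachable:
                reachable.add((x, z))
                changed = True

assert ("a", "d") in reachable
```

In P2, rules like these are compiled into a dataflow graph whose operators exchange tuples with remote nodes, so the same handful of rules both constructs and continuously maintains the overlay.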

Speaker's bio:

Timothy Roscoe received a PhD from the Computer Laboratory of the University of Cambridge, where he was a principal designer and builder of the Nemesis operating system, as well as working on the Wanda microkernel and Pandora multimedia system. After three years working on web-based collaboration systems at an Internet startup company in North Carolina, Mothy joined Sprint's Advanced Technology Lab in Burlingame, California, as a researcher, where he worked on application hosting platforms, network monitoring, and assorted systems management and security problems. Mothy joined Intel Research at Berkeley in April 2002, and has been a principal architect of PlanetLab, an open, shared platform for developing and deploying planetary-scale services. His current research interests include distributed query processing and its relationship to network routing; network architecture; and high-performance operating systems.



Latest news about lock-free object implementations
Petr Kouznetsov | EPFL

2005-07-08, 10:00 - 10:00
Saarbrücken building 46.1 - MPII, room 024

Abstract:

Lock-free implementations of shared data objects do not rely on any form of mutual exclusion, and thereby allow processes to overcome adverse operating system effects. Wait-free implementations provide the strongest form of lock-freedom and guarantee that every process can complete its operation, regardless of the execution speeds of other processes. They are in this sense very appealing, but turn out to be impossible or very expensive to achieve in many practical settings. Recently, researchers suggested a weaker liveness property, called obstruction-freedom, that guarantees progress only when there is no contention, which is argued to be the most common case in practice. However, the notion of contention has been interpreted very broadly, and, as a result, the limitations of implementations ensuring only these weaker properties remained unclear.

We formally define an adequate measure of contention, which we call step contention, and present a generic obstruction-free implementation that ensures progress for step contention-free operations. Our implementation is linear in time and space with respect to the number of concurrent processes. We show that these complexities are asymptotically optimal, and hence generic obstruction-free implementations are inherently expensive.
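To make the obstruction-freedom guarantee concrete, here is a minimal sketch of an obstruction-free increment built on compare-and-swap. The CAS register is simulated with a lock purely so the sketch runs in Python; real lock-free code would use a hardware CAS instruction, and this is not the generic implementation from the talk.

```python
import threading

class SimulatedCAS:
    """Single-word register with compare-and-swap. Simulated with a lock
    here for illustration; a real implementation uses an atomic instruction."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        return self._value

    def cas(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def obstruction_free_increment(reg):
    # Obstruction-free: the operation completes whenever the process runs
    # long enough without interference (i.e., step contention-free).
    # Under sustained contention it may retry forever, which is strictly
    # weaker than wait-freedom.
    while True:
        old = reg.read()
        if reg.cas(old, old + 1):
            return old + 1

reg = SimulatedCAS()
obstruction_free_increment(reg)
assert reg.read() == 1
```

The retry loop is exactly where step contention shows up: each failed CAS means another process took a step between the read and the CAS, and the operation's cost grows with that interference.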

Speaker's bio:

-



Measurement-driven Modeling and Design of Internet-scale Systems
Krishna Gummadi | University of Washington

2005-05-30, 17:00 - 17:00
Saarbrücken building 46.1 - MPII, room 024

Abstract:

The Internet is huge, complex, and rapidly evolving. Understanding how today's Internet-scale systems work is challenging, but crucial when designing the networks and applications of tomorrow. In this talk, I will describe how I have used a combination of measurement, modeling, and analysis to understand two Internet-scale systems: (1) peer-to-peer (P2P) file-sharing systems and their workloads, and (2) indirection routing systems that recover from Internet path failures.

In part because of the rise in popularity of P2P systems, multimedia workloads have become the dominant source of Internet traffic. Our measurements show that multimedia workloads are substantially different from traditional Web workloads. Based on an analysis of a 6-month-long trace of the Kazaa P2P system, I will propose a new model for multimedia workloads and will use it to explain how a few simple, fundamental factors drive them.

In the second part of my talk, I will focus on understanding Internet path failures and indirection-based recovery schemes. I will first characterize the frequency and location of Internet path failures that occur in practice. Using insights drawn from our measurements, I will show how a simple, stateless, and scalable scheme called "one-hop source routing" achieves close to the maximum possible recovery attainable by any indirection routing scheme. I will also relate experiences we gained from implementing and deploying one-hop source routing on PlanetLab.
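The recovery scheme described above is simple enough to sketch directly; the following is a hedged toy model of one-hop source routing, with node names and the path-availability set invented for illustration.

```python
import random

# Hypothetical sketch of "one-hop source routing": when the direct path
# fails, statelessly retry via a few randomly chosen intermediaries.
# Node names and the set of working paths are invented for illustration.
def send_direct(src, dst, working_paths):
    return (src, dst) in working_paths

def one_hop_source_route(src, dst, nodes, working_paths, k=4):
    if send_direct(src, dst, working_paths):
        return [src, dst]
    # Stateless recovery: no routing tables, just k random intermediaries.
    for hop in random.sample(nodes, min(k, len(nodes))):
        if send_direct(src, hop, working_paths) and send_direct(hop, dst, working_paths):
            return [src, hop, dst]
    return None  # recovery failed

nodes = ["n1", "n2", "n3", "n4"]
working = {("a", "n2"), ("n2", "b")}  # the direct path a -> b is down
path = one_hop_source_route("a", "b", nodes, working)
assert path == ["a", "n2", "b"]
```

The appeal of the scheme is visible in the sketch: the sender keeps no state about the network, yet because many failures are localized, a handful of random one-hop detours recovers most reachable paths.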

Speaker's bio:

-



OpenDHT: A Public DHT Service
Sean Rhea | UC Berkeley

2005-04-11, 16:00 - 16:00
Saarbrücken building 46.1 - MPII, room 024

Abstract:

Large-scale distributed systems are hard to deploy, and distributed hash tables (DHTs) are no exception. To lower the barriers facing DHT-based applications, we have created a public DHT service called OpenDHT. Designing a DHT that can be widely shared, both among mutually-untrusting clients and among a variety of applications, poses two distinct challenges. First, there must be adequate control over storage allocation so that greedy or malicious clients do not use more than their fair share. Second, the interface to the DHT should make it easy to write simple clients, yet be sufficiently general to meet a broad spectrum of application requirements. In this talk I will describe our solutions to these design challenges. I'll also report on our early deployment experiences with OpenDHT and describe the variety of applications already using the system.
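The two design challenges can be sketched together in a toy model of a shared put/get store with per-client quotas and value expiry; this is a simplified illustration in the spirit of OpenDHT, with all class and parameter names invented here rather than taken from its actual interface.

```python
import time

# Simplified sketch of a shared DHT put/get interface with per-client
# storage quotas, in the spirit of OpenDHT (names and details invented).
class SharedDHT:
    def __init__(self, quota_bytes=1024):
        self.quota = quota_bytes
        self.usage = {}   # client -> bytes currently charged to it
        self.store = {}   # key -> list of (value, expiry time)

    def put(self, client, key, value, ttl):
        size = len(value)
        if self.usage.get(client, 0) + size > self.quota:
            return False  # fair-share limit: reject greedy clients
        self.usage[client] = self.usage.get(client, 0) + size
        self.store.setdefault(key, []).append((value, time.time() + ttl))
        return True

    def get(self, key):
        now = time.time()
        return [v for (v, exp) in self.store.get(key, []) if exp > now]

dht = SharedDHT(quota_bytes=10)
assert dht.put("alice", "k", b"hello", ttl=60)
assert dht.get("k") == [b"hello"]
assert not dht.put("alice", "k2", b"toolongvalue", ttl=60)  # over quota
```

A minimal put/get interface like this is easy for simple clients, while the TTL and quota mechanics are where mutually-untrusting clients have to be kept from starving one another.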

Speaker's bio:

-



A Reboot-based Approach to High Availability
George Candea | Stanford University

2005-04-07, 17:00 - 17:00
Saarbrücken building 46.1 - MPII, room 024

Abstract:

Application-level software failures are a dominant cause of outages in large-scale software systems, such as e-commerce, banking, or Internet services. The exact root cause of these failures is often unknown and the only cure is to reboot. Unfortunately, rebooting can be expensive, leading to nontrivial service disruption or downtime even when clusters and failover are employed.

In this talk I will describe the "crash-only design," a way to build reboot-friendly systems. I will also present the "microreboot," a technique for surgically recovering faulty application components without disturbing the rest. I will argue quantitatively that recovery-oriented techniques complement bug-reduction efforts and provide significant improvements in software dependability. We applied the crash-only design and microreboot technique to a satellite ground station and an Internet auction system. Without fixing any bugs, microrebooting recovered from most of the same failures that process restarts did, but more than an order of magnitude faster and with an order of magnitude savings in lost work.

Simple, cheap recovery engenders a new way of thinking about failure management. First, we can prophylactically microreboot to rejuvenate a software system by parts; this averts failures induced by software aging, without ever having to bring the system down. Second, we can mask failure and recovery from end users through transparent call-level retries, turning failures into human-tolerable sub-second latency blips. Finally, having made recovery very cheap, it makes sense to microreboot at the slightest hint of failure -- if the microreboot is indeed necessary, we speed up recovery; if not, the impact is negligible. As a result, we productively employed failure detection based on statistical learning, which reduces false negatives at the cost of more frequent false positives. We also closed the monitor-diagnose-recover loop and built an autonomously recovering Internet service, exhibiting orders of magnitude higher availability than previously possible.
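The call-level retry idea above can be sketched in a few lines; this is a hypothetical illustration (component, method, and error names invented), not the systems built in the talk.

```python
# Hypothetical sketch of masking failures with call-level retries: on a
# component failure, microreboot just that component and retry the call,
# so the end user sees only a latency blip (all names invented).
class RecoverableComponent:
    def __init__(self):
        self.healthy = False  # starts faulty, to force one microreboot

    def microreboot(self):
        # Discard only this component's volatile state, not the whole app.
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError("component failure")
        return f"ok:{request}"

def call_with_retry(component, request, retries=3):
    for _ in range(retries):
        try:
            return component.handle(request)
        except RuntimeError:
            component.microreboot()  # cheap, surgical recovery, then retry
    raise RuntimeError("unrecoverable")

c = RecoverableComponent()
assert call_with_retry(c, "r1") == "ok:r1"
```

Because the microreboot here is cheap and has negligible cost when unnecessary, the retry wrapper can fire at the slightest hint of failure, which is exactly what makes aggressive, false-positive-prone failure detection affordable.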

Speaker's bio:

-