Talks by visitors to the Department

2024 talks

  • Modeling Nonstrategic Human Play in Games


    Speaker:

    Kevin Leyton-Brown, University of British Columbia

    Date: 2024-04-05
    Time: 17:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    It is common to assume that players in a game will adopt Nash equilibrium strategies. However, experimental studies have demonstrated that Nash equilibrium is often a poor description of human players' behavior, even in unrepeated normal-form games. Nevertheless, human behavior in such settings is far from random. Drawing on data from real human play, the field of behavioral game theory has developed a variety of models that aim to capture these patterns.


    This talk will survey over a decade of work on this topic, built around the core idea of treating behavioral game theory as a machine learning problem. It will touch on questions such as:

    - Which human biases are most important to model in single-shot game theoretic settings?

    - What loss function should be used to evaluate and fit behavioral models?

    - What can be learned by examining the parameters of these models?

    - How can richer models of nonstrategic play be leveraged to improve models of strategic agents?

    - When does a description of nonstrategic behavior "cross the line" and deserve to be called strategic?

    - How can advances in deep learning be used to yield stronger--albeit harder to interpret--models?
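
    To make the "behavioral game theory as a machine learning problem" framing concrete, here is a minimal, hypothetical sketch of a logit quantal-response model, a standard baseline for nonstrategic play, together with the likelihood-style loss used to fit such models to observed human play. The function names and example numbers are illustrative, not taken from the talk.

```python
import math

def quantal_response(utilities, lam=1.0):
    """Logit quantal response: noisy, softer-than-rational play where the
    probability of each action is proportional to exp(lam * utility)."""
    weights = [math.exp(lam * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def negative_log_likelihood(probs, observed_counts):
    """Loss for fitting a behavioral model: treat the model's predicted
    action distribution like a classifier and score observed human plays."""
    return -sum(c * math.log(p) for p, c in zip(probs, observed_counts))
```

    With lam = 0 the model plays uniformly at random, and as lam grows it approaches best response, so the fitted lam is itself an interpretable "rationality" parameter of the kind the talk's questions refer to.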


    Finally, there has been much recent excitement about large language models such as GPT-4. The talk will conclude by describing how the economic rationality of such models can be assessed and presenting some initial experimental findings showing the extent to which these models replicate human-like cognitive biases.


    Bio:

    Kevin Leyton-Brown is a professor of Computer Science and a Distinguished University Scholar at the University of British Columbia. He also holds a Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute and is an associate member of the Vancouver School of Economics. He received a PhD and an M.Sc. from Stanford University (2003; 2001) and a B.Sc. from McMaster University (1998). He studies artificial intelligence, mostly at the intersection of machine learning and either the design and operation of electronic markets or the design of heuristic algorithms. He is increasingly interested in large language models, particularly as components of agent architectures. He believes we have both a moral obligation and a historical opportunity to leverage AI to benefit underserved communities, particularly in the developing world.

    He has co-written over 150 peer-refereed technical articles and two books ("Multiagent Systems" and "Essentials of Game Theory"); his work has received over 26,000 citations, and he has an h-index of 61. He is a Fellow of the Royal Society of Canada (RSC; awarded in 2023), the Association for Computing Machinery (ACM; awarded in 2020), and the Association for the Advancement of Artificial Intelligence (AAAI; awarded in 2018). He was a member of a team that won the 2018 INFORMS Franz Edelman Award for Achievement in Advanced Analytics, Operations Research and Management Science, described as "the leading O.R. and analytics award in the industry." He and his coauthors have received paper awards from AIJ, JAIR, ACM-EC, KDD, AAMAS and LION, and numerous medals for the portfolio-based SAT solver SATzilla at international SAT solver competitions (2003–15).

    He has co-taught two Coursera courses on "Game Theory" to over a million students (and counting!), and has received awards for his teaching at UBC—notably, a Killam Teaching Prize. He served as General Chair of the 2023 ACM Conference on Economics and Computation (ACM-EC) and as Program Co-Chair for AAAI 2021 (one of the top two international conferences on artificial intelligence), among other roles. He currently advises Auctionomics, AI21, and OneChronos. He is a co-founder of Kudu.ug and Meta-Algorithmic Technologies. He was scientific advisor to UBC spinoff Zite until it was acquired by CNN in 2011. His past consulting has included work for Zynga, Qudos, Trading Dynamics, Ariba, and Cariocas.



  • Data Management for Data Science: a study on space-efficiency


    Speaker:

    Prof. Panagiotis Karras, professor of computer science at the University of Copenhagen

    Date: 2024-04-01
    Time: 16:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    Data science has the potential to extract valuable insights from data on an unprecedented scale. However, several fundamental data science tasks call for data management solutions that can effectively address problems of space-efficiency to realize this potential. This talk will focus on two cases of data management solutions that enhance scalability and space-efficiency in data science tasks. Firstly, we will discuss how to use a sophisticated technique to render the classical solution for Viterbi decoding via dynamic programming more space-efficient. Secondly, we will outline how to compute the optimal actions in a finite-horizon Markov Decision Process in a space-efficient manner. In doing so, we will sketch a vision of how data management expertise can facilitate and advance the frontiers of data science.


    Bio:

    Panagiotis Karras is a professor of computer science at the University of Copenhagen. His research interests include designing robust and versatile methods for data access, mining, analysis, and representation. He received the MSc degree in electrical and computer engineering from the National Technical University of Athens and the PhD degree in computer science from the University of Hong Kong. He was the recipient of the Hong Kong Young Scientist Award, the Singapore Lee Kuan Yew Postdoctoral Fellowship, the Rutgers Business School Teaching Excellence Fellowship, and the Skoltech Best Faculty Performance Award. His work has been published in PVLDB, SIGMOD, ICDE, KDD, AAAI, IJCAI, NeurIPS, ICLR, USENIX Security, TheWebConf, SIGIR, and ACL.



  • A billion lifelong readers: The Same Language Subtitling (SLS) story of system change from concept to national policy to quality implementation


    Speaker:

    Dr. Brij Kothari, Adjunct Professor, School of Public Policy, IIT-D and Lead, Billion Readers (BIRD) Initiative

    Date: 2024-04-01
    Time: 15:30:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    The weak foundational literacy outcomes of our complex school system have been known for decades; the result is 600 million weak readers in addition to 250 million non-readers. Over 60 percent are girls and women. The Billion Readers (BIRD) Initiative's vision is: Every Indian a fluent reader. BIRD leverages India's vibrant and multilingual entertainment ecosystem to deliver guaranteed daily and lifelong reading practice to one billion small-screen (TV, streaming & mobile) viewers.

    Brij will focus on the system change strategy that BIRD has pursued over 28 years with government, civil society, academia and media companies, including a mix of evidence-based policymaking, advocacy, coalition-building with disability rights groups, design thinking, technology development, and legal tools. Having recently joined as Adjunct Professor at SPP, IIT-D, Brij is actively exploring cross-disciplinary collaboration with faculty and students and has several projects and internship possibilities to suggest. He is especially looking for collaboration on AI-based speech-to-text projects in Indian languages to make entertainment accessible in cinema halls, on TV, and on mobiles/streaming.


    Bio:

    Dr. Brij Kothari is an academic and social entrepreneur. He recently joined the School of Public Policy, IIT-D as Adjunct Professor. He leads the Billion Readers (BIRD) Initiative and is the founder of PlanetRead.org and BookBox.com. Brij conceived of Same Language Subtitling (SLS) on mainstream TV in India for mass reading literacy in 1996, while pursuing his Ph.D. in Education at Cornell University. Since then, he has researched and pushed for SLS in national broadcast policy on the faculty of IIM-Ahmedabad (1996-2023). He is an Ashoka Fellow, a Schwab Social Entrepreneur, the recipient of the International Literacy Prize from the Library of Congress, USA, and Co-Impact's system change grant for BIRD. At IIT-D he is looking for collaboration, coffee, and tennis partners.




  • Linguistically-Informed Neural Architectures for Lexical, Syntactic, and Semantic Tasks in Sanskrit


    Speaker:

    Dr. Jivnesh Sandhan, IIT Dharwad

    Date: 2024-03-21
    Time: 11:00:00 (IST)
    Venue: SIT #001
    Abstract:

    In this talk, we will focus on how to make Sanskrit manuscripts more accessible to end-users through natural language technologies. The morphological richness, compounding, free word order, and low-resource nature of Sanskrit pose significant challenges for developing deep learning solutions. We identify four fundamental tasks that are crucial for developing robust NLP technology for Sanskrit: word segmentation, dependency parsing, compound type identification, and poetry analysis. While addressing these challenges, we make various contributions, such as proposing linguistically-informed neural architectures, showcasing their interpretability and multilingual extension, reporting state-of-the-art performance, and presenting a neural toolkit called SanskritShala, which offers real-time analysis for NLP tasks.


    Bio:

    Dr. Jivnesh Sandhan is a visiting assistant professor in the Department of Computer Science at IIT Dharwad. Prior to that, he worked remotely in the Electrical Engineering and Computer Sciences (EECS) department at the University of California, Berkeley. He completed his Ph.D. in the Department of Electrical Engineering at IIT Kanpur in 2023, where he also earned a dual degree from the Department of Mathematics and Scientific Computing in 2018. He received the prestigious Prime Minister's Research Fellowship (PMRF). His research expertise is in Natural Language Processing (NLP) for Sanskrit computational linguistics. His primary research objective is to enhance accessibility to Sanskrit literature for pedagogical and annotation purposes. To achieve this goal, he has developed cutting-edge deep-learning-based solutions for various downstream tasks in Sanskrit. His scholarly endeavors have resulted in several publications in high-ranking conference venues, including CORE-ranking A*/A conferences. His current research revolves around developing a Sanskrit-to-English machine translation system to provide accessibility to Vedic literature. Through his work, he seeks to bridge the language barrier and contribute to a broader understanding and appreciation of ancient Sanskrit texts.



  • From Biased Observations to Fair and More Effective Decisions


    Speaker:

    Nisheeth Vishnoi, A. Bartlett Giamatti Professor of Computer Science, Yale University

    Date: 2024-03-19
    Time: 17:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    Data from individuals is extensively utilized by various organizations, from multinational corporations to educational institutions, to inform decisions about individuals. However, this data often emerges from the interaction between the individual being observed and the measurement process, whether conducted by humans or AI systems. This observed data often represents a biased version of the 'true' data, and basing decisions on such data can significantly affect their fairness and effectiveness, impacting individuals, organizations, and society as a whole.

    This raises critical questions of understanding when and to what extent algorithms can be designed to behave as if they had access to true data. This talk outlines an approach to these questions for the ubiquitous subset selection problem, important in hiring and admissions. It starts with behavioral models that illustrate the transformation of true data into biased data. It then analyzes the impact of existing algorithms when working with such data, and concludes by proposing new algorithms designed to mitigate these biases.

    This talk is based on joint works with several co-authors and is suited for a wide audience, including students, academics, professionals, and anyone interested in the ethical or policy dimensions of data science and AI.



    Bio:

    Nisheeth Vishnoi is the A. Bartlett Giamatti Professor of Computer Science and a co-founder of the Computation and Society Initiative at Yale University.  He is a co-PI of an NSF-funded AI Institute: The Institute for Learning-enabled Optimization at Scale. His research spans various areas of Theoretical Computer Science, Optimization, and Artificial Intelligence. Specific current research topics include Responsible AI, foundations of AI, and data reduction methods.  He is also interested in understanding nature and society from a computational viewpoint.


    Professor Vishnoi was the recipient of the Best Paper Award at the IEEE Symposium on Foundations of Computer Science in 2005, the IBM Research Pat Goldberg Memorial Award in 2006, the Indian National Science Academy Young Scientist Award in 2011, the IIT Bombay Young Alumni Achievers Award in 2016, and the Best Paper Award at the ACM Conference on Fairness, Accountability, and Transparency in 2019.  He was named an ACM Fellow in 2019.  His most recent book, "Algorithms for Convex Optimization", was published by Cambridge University Press.



  • Towards Robust and Reliable Machine Learning: Adversaries and Fundamental Limits


    Speaker:

    Arjun Bhagoji, University of Chicago

    Date: 2024-03-04
    Time: 11:00:00 (IST)
    Venue: SIT #001
    Abstract:

    While ML-based AI systems are increasingly deployed in safety-critical settings, they continue to remain unreliable under adverse conditions that violate underlying statistical assumptions. In my work, I aim to (i) understand the conditions under which a lack of reliability can occur and (ii) reason rigorously about the limits of robustness, during both training and test phases.

    In the first part of the talk, I demonstrate the existence of strong but stealthy training-time attacks on federated learning, a recent paradigm in distributed learning. I show how a small number of compromised agents can modify model parameters via optimized updates to ensure desired data is misclassified by the global model, while bypassing custom detection methods. Experimentally, this model poisoning attack leads to a lack of reliable prediction on standard datasets.
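
    As a stylized illustration of why a small number of compromised agents can dominate a federated round: a toy one-parameter federated-averaging step with an attacker that explicitly boosts its malicious update. The numbers and the boosting attacker are illustrative sketches, not the talk's actual construction.

```python
def fedavg(updates):
    """Server-side federated averaging of client updates (toy, 1-D)."""
    return sum(updates) / len(updates)

def boosted_update(target_shift, num_clients):
    """A single compromised client scales its malicious update by the
    number of clients so that averaging does not wash it out."""
    return target_shift * num_clients

# Three honest clients send small updates; one attacker wants the global
# model to shift by 5.0 and boosts its update accordingly.
honest = [0.1, -0.2, 0.1]
round_updates = honest + [boosted_update(5.0, len(honest) + 1)]
global_shift = fedavg(round_updates)
```

    Because the honest updates roughly cancel, the boosted update steers the averaged global model almost exactly to the attacker's target, which is why detection has to look at the updates themselves rather than only at the aggregate.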

    Test-time attacks via adversarial examples, i.e. imperceptible perturbations to test inputs, have sparked an attack-defense arms race. In the second part of the talk, I step away from this arms race to provide model-agnostic fundamental limits on the loss under adversarial input perturbations. The robust loss is shown to be lower bounded by the optimal transport cost between class-wise distributions using an appropriate adversarial point-wise cost, the latter of which can be efficiently computed via a linear program for empirical distributions of interest.
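
    To make the optimal-transport quantity concrete: for tiny uniform-weight empirical samples, the transport cost under a point-wise cost reduces to a minimum-cost matching, brute-forced below (at scale, this is the linear program the abstract mentions). The 0/1 cost used here, which vanishes when eps-balls around the two points can be made to collide, is a hypothetical 1-D stand-in for the adversarial point-wise cost.

```python
from itertools import permutations

def adv_cost(x, y, eps):
    """0/1 point-wise cost: 0 if eps-bounded perturbations of each point
    can make x and y collide, else 1 (hypothetical 1-D illustration)."""
    return 0.0 if abs(x - y) <= 2 * eps else 1.0

def ot_cost(xs, ys, eps):
    """Optimal transport cost between two uniform empirical distributions,
    brute-forced as a minimum-cost perfect matching over all pairings."""
    n = len(xs)
    return min(
        sum(adv_cost(xs[i], ys[p[i]], eps) for i in range(n))
        for p in permutations(range(n))
    ) / n
```

    The abstract's lower bound on robust loss is stated in terms of exactly this kind of transport cost between the class-wise distributions.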

    To conclude, I will discuss my ongoing efforts and future vision towards building continuously reliable and accessible ML systems by accounting for novel attack vectors and new ML paradigms such as generative AI, as well as developing algorithmic tools to improve performance in data-scarce regimes.


    Bio:

    Arjun Bhagoji is a Research Scientist in the Department of Computer Science at the University of Chicago. He obtained his Ph.D. in Electrical and Computer Engineering from Princeton University, where he was advised by Prateek Mittal. Before that, he received his Dual Degree (B.Tech+M.Tech) in Electrical Engineering at IIT Madras, where he was advised by Andrew Thangaraj and Pradeep Sarvepalli. Arjun's research has been recognized with a Spotlight at the NeurIPS 2023 conference, the Siemens FutureMakers Fellowship in Machine Learning (2018-2019) and the 2018 SEAS Award for Excellence at Princeton University. He was a 2021 UChicago Rising Star in Data Science, a finalist for the 2020 Bede Liu Best Dissertation Award in Princeton's ECE Department and a finalist for the 2017 Bell Labs Prize.



  • Trends and recent results in the study of non-interactive multi-party computation (NIMPC)


    Speaker:

    Prof. Tomoharu Shibuya, Sophia University

    Date: 2024-03-01
    Time: 16:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    A large amount of data is required to improve the accuracy of machine learning. However, data often contains private personal and sensitive corporate information, making it difficult to freely utilize large amounts of data. Therefore, security technologies that perform various computations on data while maintaining its confidentiality are attracting attention.

    Secure Multi-Party Computation (MPC) is a method for parties to jointly compute a function over their inputs while keeping those inputs private. MPC has the drawback that as the number of participating parties increases, the amount of communication between the parties becomes enormous. To overcome this drawback, secure non-interactive MPC (NIMPC) was developed; it introduces a protocol setup server and a computation server, and each party communicates only with these servers.

    In this talk, I will explain a simple method for realizing NIMPC and introduce recent research on it. In particular, I will introduce an evolving NIMPC that can perform computations without changing the setup made at the start of the protocol, even if the number of parties increases after the protocol starts.
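
    A toy sketch of the additive secret sharing that underlies many MPC protocols, to make the "compute on data while keeping inputs private" idea concrete. This illustrates plain MPC-style sharing, not the NIMPC construction from the talk; the function names and modulus are illustrative.

```python
import secrets

MOD = 2**61 - 1  # all arithmetic is modulo a fixed prime

def share(x, n):
    """Split x into n additive shares; any n-1 of them look uniformly random."""
    parts = [secrets.randbelow(MOD) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % MOD)
    return parts

def secure_sum(inputs):
    """Each party splits its input into one share per aggregator; each
    aggregator sums the shares it receives. Combining the aggregator totals
    yields the sum of all inputs, yet no single aggregator learns any input."""
    n = len(inputs)
    shares = [share(x, n) for x in inputs]
    totals = [sum(s[j] for s in shares) % MOD for j in range(n)]
    return sum(totals) % MOD
```

    The communication pattern here (each party sends one message per server, with no party-to-party interaction) mirrors the non-interactive flavour described in the abstract.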

    I will also provide a comprehensive introduction to the research and faculty members at the Department of Information and Communication Sciences, Sophia University, and discuss possibilities for student and research exchanges between IIT Delhi and Sophia University.


    Bio:

    Prof. Tomoharu Shibuya (https://researchmap.jp/read0183734?lang=en)



  • Heterogeneous Benchmarking across Domains and Languages: The Key to Enable Meaningful Progress in IR Research


    Speaker:

    Nandan Thakur

    Date: 2024-01-23
    Time: 15:00:00 (IST)
    Venue: SIT #001
    Abstract:

    Benchmarks are essential to measure realistic progress within Information Retrieval. However, existing benchmarks quickly saturate, as they are prone to overfitting that hurts retrieval models' generalization. To overcome these challenges, I will present two of my research efforts: BEIR, a heterogeneous benchmark for zero-shot evaluation across specialized domains, and MIRACL, a monolingual retrieval benchmark covering a diverse range of languages. In BEIR, we show that neural retrievers surprisingly struggle to generalize zero-shot to specialized domains due to a lack of training data. To overcome this, we develop GPL, which distills cross-encoder knowledge using synthetic cross-domain BEIR data. On the language side, MIRACL provides robust annotations and broader coverage of languages. However, generating supervised training data is cumbersome in realistic settings. To supplement it, we construct SWIM-IR, a synthetic training dataset with 28 million LLM-generated pairs across 37 languages, which we use to develop multilingual retrievers comparable in performance to supervised models and which can be cheaply extended to new languages.


    Bio:

    Nandan Thakur is a third-year PhD student in the David R. Cheriton School of Computer Science at the University of Waterloo under the supervision of Prof. Jimmy Lin. His research broadly investigates data efficiency and model generalization across specialized domains and languages in information retrieval. He was the co-organizer of the MIRACL competition in WSDM 2023 and will co-organize the upcoming RAG Track in TREC 2024. His work has been published in top conferences and journals, including ACL, NAACL, NeurIPS, SIGIR, and TACL.



  • LLMs for Everybody: How inclusive are the LLMs today and Why should we care?


    Speaker:

    Monojit Choudhury, professor of Natural Language Processing at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi

    Date: 2024-01-23
    Time: 17:00:00 (IST)
    Venue: SIT #001
    Abstract:

    Large Language Models (LLMs) have revolutionized the field of NLP and natural human-computer interactions; they hold a lot of promise, but are these promises equitable across countries, languages and other demographic groups? Research from our group as well as from around the world is constantly revealing that LLMs are biased in their language processing abilities in all but a few of the world's languages, in their cultural awareness (or lack thereof), and in their value alignment. In this talk, I will highlight some of our recent findings on value-alignment bias in these models and argue why we need models that can reason generically across moral values and cultural conventions.
    We will also discuss some of the opportunities for students at the postgraduate, PhD, and postdoctoral levels at the newly founded MBZUAI.


    Bio:

    Monojit Choudhury is a professor of Natural Language Processing at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi. Prior to this, he was a principal scientist at Microsoft Research Lab and Microsoft Turing, India. He is also a professor of practice at Plaksha University and an adjunct professor at IIIT Hyderabad. Prof. Choudhury's research interests lie at the intersection of NLP, social and cultural aspects of technology use, and ethics. In particular, he has been working on multilingual aspects of large language models (LLMs), their use in low-resource languages, and making LLMs more inclusive and safer by addressing bias and fairness. Prof. Choudhury is the general chair of the Indian National Linguistics Olympiad and the founding co-chair of the Asia-Pacific Linguistics Olympiad. He holds BTech and PhD degrees in Computer Science and Engineering from IIT Kharagpur.



  • Geometric GNNs for 3D Atomic Systems


    Speaker:

    Chaitanya K. Joshi

    Date: 2024-01-18
    Time: 15:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    Recent advances in computational modelling of atomic systems, spanning molecules, proteins, and materials, represent them as geometric graphs with atoms embedded as nodes in 3D Euclidean space. Geometric Graph Neural Networks have emerged as the preferred ML architecture powering breakthroughs ranging from protein structure prediction to molecular simulations and material generation. Their specificity lies in the inductive biases they leverage — such as the underlying physical symmetries and chemical properties — to learn informative representations of geometric graphs. This talk will provide an overview of Geometric GNNs for 3D atomic systems. I will introduce a pedagogical taxonomy of Geometric GNN architectures from the perspective of their theoretical expressive power and highlight practical shortcomings of current models. This talk is based on our recent works: https://arxiv.org/abs/2301.09308, https://arxiv.org/abs/2312.0751
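
    As a minimal illustration of the symmetry-based inductive bias mentioned above: a toy message-passing layer whose messages depend on 3D geometry only through pairwise distances, and which is therefore invariant to rotations and translations of the input. This is a pedagogical sketch, not one of the architectures from the talk or papers.

```python
import math

def invariant_layer(coords, feats):
    """Toy E(3)-invariant message passing on a fully connected geometric graph.

    coords : list of (x, y, z) node positions
    feats  : list of scalar node features
    Messages use geometry only via inter-node distances, so rigid motions
    of the point cloud leave the output features unchanged.
    """
    out = []
    for i, (ci, fi) in enumerate(zip(coords, feats)):
        msg = sum(
            feats[j] * math.exp(-math.dist(ci, coords[j]))
            for j in range(len(coords)) if j != i
        )
        out.append(fi + msg)
    return out
```

    More expressive geometric GNNs (a theme of the taxonomy in the talk) go beyond such distance-only invariant features, e.g. by propagating equivariant vector or tensor quantities.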


    Bio:

    Chaitanya K. Joshi is a 3rd year PhD student at the Department of Computer Science, University of Cambridge, supervised by Prof. Pietro Liò. His research explores the intersection of Geometric Deep Learning and Graph Neural Networks for applications in biomolecule modelling & design. He previously did an undergraduate degree in Computer Science from Nanyang Technological University and worked as a Research Engineer at A*STAR in Singapore.



  • Introduction to Digital Forensics


    Speaker:

    Dr. Andrey Chechulin

    Date: 2024-01-18
    Time: 12:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    This lecture is a foundational exploration of Digital Forensics, a discipline focused on the identification, extraction, preservation, and analysis of digital evidence. Its relevance spans criminal and civil law, where digital evidence is increasingly pivotal. During the lecture we will discuss the broad spectrum of digital evidence, from computer systems to mobile devices, and the unique challenges each presents. The lecture will highlight the critical role digital forensics plays in solving cybercrimes and in resolving legal disputes involving digital data. In addition to theoretical aspects, examples of the practical application of digital forensics will be discussed. Designed for beginners and professionals alike, such as IT experts, lecturers, or students, this lecture aims to impart a comprehensive understanding of digital forensics and its indispensable role in contemporary digital investigations.


    Bio:

    Andrey Chechulin is a Candidate of Technical Sciences (2013, SPbSUT, Russia) and an Associate Professor (2021, SPbSUT, Russia). Currently, he is the Head of the International Center for Digital Forensics and a leading researcher at the Laboratory of Computer Security Problems of the SPC RAS (Saint-Petersburg, Russia). He is also an associate professor at SPbSUT and ITMO Universities. He has been an invited professor and a scientific advisor of master's and PhD students at universities in France, Sweden, and Russia. He is a member of many editorial boards of Russian and international journals, and the author of more than 200 refereed publications, including several books and monographs. As a project leader, he has participated in over 15 Russian and international scientific projects for Russian and EU scientific foundations and commercial companies in Russia and abroad. As a security expert, he has conducted more than 200 expert assessments, both in the practical field of cybercrime investigation and court cases and in the academic field, serving as a reviewer for leading international journals, conferences, and research foundations. As a science communicator, he regularly appears on various regional and federal media broadcasts and delivers public lectures on information security. His main research interests include digital forensics, computer network security, artificial intelligence, cyber-physical systems, social network analysis, and security data visualization.



  • Network Security and Vulnerabilities Analysis


    Speaker:

    Dr. Dmitry Levshun

    Date: 2024-01-18
    Time: 12:45:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    Scientists and developers all over the world are working hard to ensure the information security of network systems. This task is complex due to the diversity of threats and the wide range of security requirements. Moreover, specialists detect new vulnerabilities every day, while old vulnerabilities are still present in working systems. The goal of this lecture is to provide information about the basics of network security evaluation using attack graphs. We will go into detail on how vulnerabilities are represented in open databases and how they can be categorised. After that, we will go step by step through host attack graph construction and analysis. In the end, we will discuss how Artificial Intelligence can be used to improve vulnerability categorisation.
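
    A minimal sketch of the attack-graph idea described above: given network connectivity and a set of hosts with known vulnerabilities, compute which hosts an attacker can eventually compromise. Real attack graphs also model privileges, exploit preconditions, and vulnerability scoring; the host names here are hypothetical.

```python
from collections import deque

def compromised_hosts(reachable, vulnerable, start):
    """Breadth-first traversal of a toy host attack graph.

    reachable  : dict mapping each host to the hosts it can connect to
    vulnerable : set of hosts with an exploitable vulnerability
    start      : the attacker's initial foothold
    A neighbouring host is compromised only if it is vulnerable.
    """
    owned = {start}
    frontier = deque([start])
    while frontier:
        host = frontier.popleft()
        for nxt in reachable.get(host, []):
            if nxt in vulnerable and nxt not in owned:
                owned.add(nxt)
                frontier.append(nxt)
    return owned
```

    Even in this toy form, the traversal shows why a single unpatched host on an attack path can expose everything behind it.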


    Bio:

    Dmitry Levshun is a Candidate of Technical Sciences (ITMO University, Russia) and a Doctor of Philosophy in Computer Science (University of Toulouse III, France). He is the author of more than 30 publications indexed by Scopus and Web of Science (H-index 8), 5 of which are in Q1 journals, and he holds over 20 certificates of state registration of programs and databases. He is an active participant in more than 15 research and development projects funded by Russian foundations and heads an initiative research project conducted by young scientists. He works as a Senior Researcher at the Laboratory of Computer Security Problems of SPC RAS and as a leading expert at the International Center for Digital Forensics of SPC RAS. Dmitry is also an Associate Professor at leading universities in St. Petersburg, namely SPbSUT (Secure Communication Systems department) and EUSPb (Applied Data Analysis program). He is a member of the program committees of the FRUCT and COMSNETS conferences and a reviewer for scientific journals such as Electronics, Machines, Micromachines, Inventions, Future Internet, and Microprocessors and Microsystems. His research interests include information security, the Internet of Things, artificial intelligence, security by design, and the modeling of malicious activity.



  • Artificial Intelligence for Cyber Security


    Speaker:

    Dr. Igor Kotenko 

    Date: 2024-01-17
    Time: 16:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    Artificial intelligence (AI) has become one of the main approaches to processing huge amounts of heterogeneous data and performing various cyber security tasks, including vulnerability management and security assessment, security monitoring, and distributed access control. AI is changing how computers are programmed and how they are used. In the modern interpretation, AI systems are primarily machine learning systems, and sometimes the term is narrowed further to artificial neural networks. In cyber security, AI methods have made it possible to create advanced defensive tools, but they have also allowed attackers to significantly improve their attacks. The evolution of attack and defense tools has taken place mainly as an arms race, one that is essentially asymmetric and beneficial to attackers: cybercriminals can launch targeted attacks at unprecedented speed and scale while bypassing traditional detection mechanisms. The talk surveys the current state of AI in cyber security and analyzes the key areas at the intersection of the two fields: enhancing cyber security with AI, AI for cyber attacks, the vulnerability of AI systems to attacks, and the use of AI in malicious information operations. Our own research on intelligent monitoring of cyber security and detection of cyber attacks is also presented. This research is supported by Russian Science Foundation grant #21-71-20078 at SPC RAS.


    Bio:

    Igor Kotenko is Chief Scientist and Head of the Research Laboratory of Computer Security Problems at the St. Petersburg Federal Research Center of the Russian Academy of Sciences. He is also a Professor at ITMO University, St. Petersburg, Russia, and at the Bonch-Bruevich Saint-Petersburg State University of Telecommunications. He is an Honored Scientist of the Russian Federation, an IEEE Senior Member, a member of many editorial boards of Russian and international journals, and the author of more than 800 refereed publications, including 25 books and monographs. His main research results are in artificial intelligence, telecommunications, and cyber security, including network intrusion detection, modeling and simulation of network attacks, vulnerability assessment, security information and event management, and verification and validation of security policy. Igor Kotenko has been a project leader in research projects funded by the European Office of Aerospace Research and Development, EU FP7 and FP6 projects, HP, Intel, F-Secure, Huawei, and others. His research results have been tested and implemented in a multitude of Russian research and development projects, including grants of the Russian Science Foundation and the Russian Foundation for Basic Research and numerous state contracts. He has been a keynote and invited speaker at numerous international conferences and workshops, and has chaired many international conferences.



  • A New Perspective on Invariant Generation as Semantic Unification


    Speaker:

    Prof. Deepak Kapur, University of New Mexico

    Date: 2024-01-12
    Time: 15:00:00 (IST)
    Venue: #501, Bharti Building
    Abstract:

    Unification is the problem of finding instantiations of variables in a finite set of equations constructed using function symbols such that both sides of each instantiated equation are equal. In semantic unification, also called E-unification, function symbols can have properties specified, typically by an equational theory; a unifier then makes the two instantiated sides of each equation equivalent modulo the equational theory. By generalizing the unification problem to a first-order theory in which variables in the problem stand for formulas in the theory, the invariant generation problem in software, hardware, and cyber-physical systems can be formulated as a unification problem. Finding a nontrivial unifier in this case amounts to finding an invariant, which is a formula in the theory. Similarly, finding a most general unifier in that theory amounts to finding the strongest invariant. Instantiation of variables can be further restricted to formulas with certain shapes/properties. A number of examples from the literature on the automatic generation of loop invariants in software will be used to illustrate this new perspective.
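    To ground the definitions above, here is a toy implementation of ordinary syntactic unification, the E-free special case of the problem. The representation choices are ours: a variable is a Python string, and a compound term or constant is a (symbol, argument-list) pair.

```python
# Toy syntactic unification (the E-free special case). Variables are
# strings; compound terms and constants are (symbol, argument_list) pairs.

def walk(term, subst):
    """Follow variable bindings in the substitution."""
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    """Occurs check: does var appear inside term?"""
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg, subst) for arg in term[1])
    return False

def bind(var, term, subst):
    if occurs(var, term, subst):
        return None                          # e.g. x vs f(x): no unifier
    return {**subst, var: term}

def unify(s, t, subst=None):
    """Return a most general unifier of s and t, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                   # s is a variable
        return bind(s, t, subst)
    if isinstance(t, str):                   # t is a variable
        return bind(t, s, subst)
    (f, s_args), (g, t_args) = s, t
    if f != g or len(s_args) != len(t_args): # symbol clash
        return None
    for a, b in zip(s_args, t_args):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Unifying f(x, b) with f(a, y) yields the substitution {x -> a, y -> b}.
mgu = unify(("f", ["x", ("b", [])]), ("f", [("a", []), "y"]))
```

    In E-unification, the equality test is replaced by equivalence modulo the equational theory: for instance, with a commutative `+`, the terms `x + 1` and `1 + a` fail to unify syntactically but unify modulo commutativity via `{x -> a}`.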


    Bio:

    A distinguished professor at the University of New Mexico since 1998, Kapur served as chair of the Department of Computer Science from December 1998 to June 2006. He has adjunct appointments at IIT Delhi, India, as well as the Tata Institute of Fundamental Research, Mumbai, India. From 1980 to 1987, he was on the research staff of General Electric Corporate Research and Development, Schenectady, NY. He was appointed tenured full professor at the University at Albany, SUNY, Albany, NY, in 1988, where he also founded the Institute for Programming and Logics. He has had research collaborations all over the world, including TIFR, India; MPI, Saarbrucken, Germany; the Chinese Academy of Sciences, Beijing; IMDEA, Madrid, and UPC, Barcelona; and the Naval Research Lab, Washington. He serves on the editorial boards of numerous journals, including the Journal of Symbolic Computation and the Journal of Automated Reasoning, for which he also served as editor-in-chief from 1993 to 2007. Kapur is on the board of the United Nations University International Institute for Software Technology as well as LIPIcs: Leibniz International Proceedings in Informatics. Kapur was honored with the Herbrand Award in 2009 for distinguished contributions to automated reasoning.



  • In Search of a Networking Unicorn: Realizing Closed-Loop ML Pipeline for Networking


    Speaker:

    Arpit Gupta, Assistant Professor, UCSB

    Date:2024-01-11
    Time:12:00:00 (IST)
    Venue:Bharti 501
    Abstract:

    Machine Learning (ML) and Artificial Intelligence (AI) are driving transformative changes across various domains, including networking. It is widely assumed that ML/AI-based solutions to complex security- or performance-specific problems outperform traditional heuristics and statistical methods. However, this optimism raises a fundamental question: can our current ML/AI-based solutions be used for high-stakes decision-making in production networks, where errors can have serious consequences? Unfortunately, many of these solutions have struggled to fulfill their promises. The primary issues stem from the use of inadequate training data and an overemphasis on narrowly scoped performance metrics (e.g., F1 scores), neglecting other critical aspects (e.g., a model's vulnerability to underspecification issues, such as shortcut learning). The result has been a general reluctance among network operators to deploy ML/AI-based solutions in their networks.

    In this talk, I will highlight our efforts to bridge this trust gap by arguing for and developing a novel closed-loop ML workflow that replaces the commonly used standard ML pipeline. Instead of focusing solely on the model's performance and requiring the selection of the "right" data upfront, our newly proposed ML pipeline emphasizes an iterative approach to collecting the "right" training data, guided by an in-depth understanding and analysis of the model's decision-making and its (in)ability to generalize. In presenting the building blocks of our novel closed-loop ML pipeline for networking, I will discuss (1) Trustee, a global model explainability tool that helps identify underspecification issues in ML models; (2) netUnicorn, a data-collection platform that simplifies iteratively collecting the "right" data for any given learning problem from diverse network environments; and (3) PINOT, a suite of active and passive data-collection tools that facilitate transforming enterprise networks into scalable data-collection infrastructure. I will conclude the talk by discussing the potential for developing a community-wide infrastructure to support this closed-loop ML pipeline for developing generalizable ML/AI models as key ingredients for the future creation of deployment-ready ML/AI artifacts for networking.


    Bio:

    Arpit Gupta is an assistant professor in the computer science department at UCSB. His research focuses on building flexible, scalable, and trustworthy systems that solve real-world problems at the intersection of networking, security, and machine learning. He also develops systems that aid in characterizing and addressing digital inequity issues. He developed BQT, a tool to extract broadband plans offered by ISPs in the US; Trustee, a tool to explain the decision-making of ML artifacts for networking; netUnicorn, a network data collection platform for machine learning applications; Sonata, a streaming network telemetry system; and SDX, an Internet routing control system. His work on augmenting crowdsourced Internet measurement data using BQT received the Distinguished Paper Award at ACM IMC’22; Trustee received the IETF/IRTF Applied Networking Research Award and a Best Paper Award (honorable mention) at ACM CCS’22; SDX received the Internet2 Innovation Award, the Best of Rest Community Contribution Award at USENIX NSDI’16, and the Best Paper Award at ACM SOSR’17. Arpit received his Ph.D. from Princeton University. He completed his master's degree at NC State University and a bachelor's degree at the Indian Institute of Technology, Roorkee, India.



  • Towards Evolving Operating System


    Speaker:

    Prof. Sanidhya Kashyap, Assistant Professor, EPFL

    Date:2024-01-10
    Time:14:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    In this talk, I will present our ongoing effort to dynamically specialize the OS kernel based on application requirements. In the first part of the talk, I will propose a new synchronization paradigm, contextual concurrency control (C3), that enables applications to tune concurrency control in the kernel. C3 allows developers to change the behavior and parameters of kernel locks, switch between different lock implementations, and dynamically profile one or multiple locks for a specific scenario of interest. This approach opens up a plethora of opportunities to fine-tune concurrency control mechanisms on the fly.

     

    In the later part, I will present a new approach to designing a storage stack that allows file system developers to design userspace file systems without compromising file system security guarantees while at the same time ensuring direct access to non-volatile memory (NVM) hardware. I will present a new file system architecture called Trio that decouples file system design, access control, and metadata integrity enforcement. The key insight is that other state (i.e., auxiliary state) in a file system can be regenerated from its “ground truth” state (i.e., core state). This approach can pave the way for providing a clean structure to design file systems.


    Bio:

    Sanidhya Kashyap is a systems researcher and an Assistant Professor at the School of Computer and Communication Sciences at EPFL. His research focuses on designing robust and scalable systems software, such as operating systems, file systems, and system security. He has published in top-tier systems conferences (SOSP, OSDI, ASPLOS, ATC, and EuroSys) and security conferences (CCS, IEEE S&P, and USENIX Security). He is the recipient of the VMware Early Career Faculty Award. He received his Ph.D. degree from Georgia Tech in 2020.



  • Memory as a lens to understand efficient learning and optimization


    Speaker:

    Dr. Vatsal Sharan (Univ. Southern California) 

    Date:2024-01-02
    Time:12:00:00 (IST)
    Venue:#404, Bharti Building
    Abstract:

    What is the role of memory in learning and optimization? The optimal convergence rates (measured in terms of the number of oracle queries or samples needed) for various optimization problems are achieved by computationally expensive optimization techniques, such as second-order methods and cutting-plane methods. We will discuss whether simpler, faster, and memory-limited algorithms such as gradient descent can achieve these optimal convergence rates for the prototypical optimization problem of minimizing a convex function with access to a gradient or a stochastic gradient oracle. Our results hint at a perhaps curious dichotomy: it is not possible to significantly improve on the convergence rate of known memory-efficient techniques (which are linear-memory variants of gradient descent for many of these problems) without using substantially more memory (quadratic memory for many of these problems). Therefore, memory could be a useful discerning factor that provides a clear separation between 'efficient' and 'expensive' techniques. Finally, we also discuss how exploring the landscape of memory-limited optimization sheds light on new problem structures where it is possible to circumvent our lower bounds, and suggests new variants of gradient descent.
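    The memory contrast at the heart of the abstract can be illustrated with a toy sketch (our own illustration, not material from the talk): gradient descent stores only the current iterate, i.e. O(d) memory, whereas second-order and cutting-plane methods maintain O(d^2) state such as a Hessian or a constraint representation.

```python
import numpy as np

# Gradient descent with a gradient oracle: the only working memory is a
# single d-dimensional vector, in contrast to methods that keep a d x d
# matrix around.

def gradient_descent(grad, x0, step=0.1, iters=200):
    """Minimize a convex function given only a gradient oracle."""
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Example: f(x) = 0.5 * ||x - c||^2 has gradient x - c and minimizer c.
c = np.array([1.0, -2.0, 3.0])
x_min = gradient_descent(lambda x: x - c, np.zeros(3))
```

    On this strongly convex example the iterate converges geometrically to `c`; the abstract's question is whether such O(d)-memory methods can ever match the oracle complexity of their quadratic-memory counterparts.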


    Bio:

    Vatsal Sharan is an assistant professor in the CS department at the University of Southern California. He did his undergraduate at IIT Kanpur, PhD at Stanford and a postdoc at MIT. He is interested in the foundations of machine learning, particularly in questions of computational & statistical efficiency, fairness and robustness. 




2023 talks

  • Towards More Responsible Data-Driven Systems


    Speaker:

    Dr. Suraj Shetiya

    Date:2023-12-11
    Time:10:00:00 (IST)
    Venue:MS Teams
    Abstract:

    Improper management of data brings tremendous harm to society. Explainability and interpretability are an important part of responsible data management. In this talk, I present some of my recent research on fairness and responsible data management, with emphasis on how these impact systems. More specifically, the talk deep dives into the integration of fairness into range queries in databases. In the later parts of the talk, we will look at the interpretability of user preferences in multi-criteria decision systems and its practical applications. I will end the talk by outlining my future research and teaching plans.


    Bio:

    Suraj Shetiya received his Bachelor of Engineering degree in Computer Science from Visvesvaraya Technological University, India, in 2010. After completing his Master's in Computer Science from the University of Texas at Arlington in 2017, he started pursuing doctoral research under the supervision of Dr. Gautam Das. During this time, he served as a Teaching Assistant for various courses in the CS Department from 2018 to 2023. He is the recipient of the STEM fellowship from 2017 to 2023. For his outstanding work as a PhD student, he received the Outstanding Doctoral Student and Outstanding Doctoral Dissertation awards from the Computer Science department. During his Ph.D., he has co-authored 6+ peer-reviewed papers published in top-tier database conferences: SIGMOD, VLDB, and ICDE.
     


  • Robust Autonomous Vehicle Localization using GPS: from Tandem Drifting Cars to "GPS" on the Moon


    Speaker:

    Prof. Grace Gao

    Date:2023-12-05
    Time:12:00:00 (IST)
    Venue:SIT #001
    Abstract:

    Autonomous vehicles often operate in complex environments with various sensing uncertainties. On Earth, GPS signals can be blocked or reflected by buildings; and camera measurements are susceptible to lighting conditions. While having a variety of sensors is beneficial, including more sensing information can introduce more sensing failures as well as more computational load. For space applications, such as localization on the Moon, it can be even more challenging. In this talk, I will present our recent research efforts on robust vehicle localization under sensing uncertainties. We turn sensing noise and even absence of sensing into useful navigational signals. Inspired by cognitive attention in humans, we select a subset of "attention landmarks" from sensing measurements to reduce computation load and provide robust positioning. I will also show our localization techniques that enable various applications, from autonomous tandem drifting cars to a GPS-like system for the Moon.


    Bio:

    Grace X. Gao is an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. She leads the Navigation and Autonomous Vehicles Laboratory (NAV Lab). Prof. Gao has won a number of awards, including the National Science Foundation CAREER Award, the Institute of Navigation Early Achievement Award and the RTCA William E. Jackson Award. Prof. Gao and her students won Best Presentation of the Session/Best Paper Awards 29 times at Institute of Navigation conferences over the past 17 years. She also won various teaching and advising awards, including the Illinois College of Engineering Everitt Award for Teaching Excellence, the Engineering Council Award for Excellence in Advising, AIAA Illinois Chapter's Teacher of the Year, and most recently Advisor of the Year Award and Teacher of the Year Award by AIAA Stanford Chapter in 2022 and 2023, respectively.




  • Formal Methods for Software Reliability and Synthesis


    Speaker:

    Ashish Mishra

    Date:2023-11-23
    Time:11:30:00 (IST)
    Venue:MS Teams
    Abstract:

    Building reliable software has been a classical goal in Computer Science. The most basic premise of my research is derived from this goal: can we make programs safe and reliable using formal techniques while making programming as a discipline more democratic and accessible to the masses?

     

    In this talk, I will begin by highlighting some of these overarching research interests and directions.  I will primarily present two of my recent works highlighting the effective use of Refinement types, DSLs, and SMT-based techniques for the verification and synthesis of programs.

     

    (i) The first is a new specification-guided synthesis procedure that uses Hoare-style pre- and post-conditions to express the fine-grained effects of potential library component candidates and drive a bi-directional synthesis search strategy. It integrates a conflict-driven learning procedure into the synthesis algorithm, which provides a semantic characterization of previously encountered unsuccessful search paths that is used to prune the space of possible candidates as synthesis proceeds.

     

    (ii) The second work is a new refinement-type system called Coverage Type, which adapts recent work on Incorrectness Logic to the specification and automated verification of test input generators used in modern property-based testing systems. Specifications are expressed in the language of refinement types, augmented with coverage types: types that reflect underapproximate constraints on program behavior.


    Bio:

    Ashish Mishra is a Postdoctoral Researcher at Purdue University, where he works with Suresh Jagannathan in the areas of Programming Languages, Program Verification, and Program Synthesis. Ashish obtained his Ph.D. from the Indian Institute of Science, where he worked under the supervision of Y. N. Srikant. In addition to his work in Computer Science, Ashish is also interested in applying technology to public policies and solving social problems. He is currently involved with several Indian NGOs such as PARI (People's Archive for Rural India), Mosali (a startup trying to bring women into workforce), and others that are involved in Media Monitoring and Research.

     



  • Stochastic Window Mean-payoff Games


    Speaker:

    Shibashis Guha, TIFR Bombay

    Date:2023-11-22
    Time:12:00:00 (IST)
    Venue:#501, Bharti Building
    Abstract:

    Stochastic two-player games model systems with an environment that is both adversarial and stochastic. The environment is modeled by a player (Player 2) who tries to prevent the system (Player 1) from achieving its objective. We consider finitary versions of the traditional mean-payoff objective, replacing the long-run average of the payoffs by the payoff average computed over a finite sliding window. Two variants have been considered: in one variant, the maximum window length is fixed and given, while in the other, it is not fixed but is required to be bounded. For both variants, we present complexity bounds and algorithmic solutions for computing strategies for Player 1 to ensure that the objective is satisfied with positive probability, with probability 1, or with probability at least p, regardless of the strategy of Player 2. The solution crucially relies on a reduction to the special case of non-stochastic two-player games. We give a general characterization of prefix-independent objectives for which this reduction holds. By our reduction, the memory requirement for both players in stochastic games is the same as in non-stochastic games. Moreover, for non-stochastic games, we improve upon the upper bound for the memory requirement of Player 1 and upon the lower bound for the memory requirement of Player 2.

    This is a joint work with Laurent Doyen and Pranshu Gaba.
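    For intuition, the fixed-window condition can be sketched at the level of a single finite path (a simplification of the full game-theoretic setting in the talk; the function names and the zero threshold are our own illustrative choices): a position is "good" if some window starting there, of length at most a given bound, has average payoff at least the threshold.

```python
# Path-level sketch of the fixed-window mean-payoff condition.

def good_window(payoffs, i, lmax, threshold=0.0):
    """Does some window payoffs[i : i + l], 1 <= l <= lmax, have
    average payoff >= threshold?"""
    total = 0.0
    for length in range(1, lmax + 1):
        if i + length > len(payoffs):
            break                    # window would run off this finite path
        total += payoffs[i + length - 1]
        if total / length >= threshold:
            return True
    return False

def fixed_window_holds(payoffs, lmax, threshold=0.0):
    """Does every position of the finite path open a good window?"""
    return all(good_window(payoffs, i, lmax, threshold)
               for i in range(len(payoffs)))
```

    For instance, the path `[1, -1, 1, -1, 1]` satisfies the condition with a window bound of 2: every dip is compensated within two steps. This illustrates how the window objective is a finitary strengthening of the plain long-run average.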


    Bio:

    Shibashis Guha (https://www.tifr.res.in/~shibashis.guha/)



  • Reasoning with interactive guidance


    Speaker:

    Dr.Niket Tandon

    Date:2023-11-21
    Time:11:00:00 (IST)
    Venue: SIT- 001
    Abstract:

    Humans view AI as a tool that listens and learns from their interactions, but this differs from the standard train-test paradigm. The goal of this talk is to introduce a step towards bridging this gap by enabling large language models to focus on human needs and continuously learn. Drawing inspiration from the theory of recursive reminding in psychology, we propose a memory architecture that guides models to avoid repeating past errors. The talk discusses four essential research questions: who to ask, what to ask, when to ask, and how to apply the obtained guidance. In tasks such as moral reasoning and planning, as well as other reasoning and benchmark tasks, our approach enables models to improve with reflection.

     


    Bio:

    Niket Tandon is a Senior Research Scientist at the Allen Institute for AI in Seattle. His research interests are in commonsense reasoning and natural language guided reasoning. He works at the Aristo team responsible for creating AI which aced science exams. He obtained his Ph.D. from the Max Planck Institute for Informatics in Germany in 2016, where he was supervised by Professor Gerhard Weikum, resulting in the largest automatically extracted commonsense knowledge base at the time, called WebChild. He is also the founder of PQRS research, which introduces undergraduate students from underrepresented institutes to AI research. More information from him is available here: https://niket.tandon.info/



  • Online List Labeling: Leveraging Predictions for Data Structures


    Speaker:

     Dr. Shikha Singh

    Date:2023-11-06
    Time:04:00:00 (IST)
    Venue:Bharti 501
    Abstract:

    A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This talk describes recent results on how predictions can be leveraged in the fundamental online list labeling problem. In the problem, n items arrive over time and must be stored in sorted order in an array of size Θ(n). The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled).
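    As a concrete baseline for the problem statement above, the following sketch (our own naive strategy, not the speaker's algorithm) stores the items contiguously in sorted order and shifts a suffix on every insert, counting each moved element as a relabel; efficient list-labeling solutions instead spread elements out to achieve far fewer amortized moves.

```python
# Naive online list labeling: the slot index is the label, and every
# shifted element counts as one relabel.

class NaiveListLabeling:
    def __init__(self, capacity):
        self.slots = [None] * capacity   # slot index == label
        self.moves = 0                   # total relabels so far

    def insert(self, item):
        occupied = [i for i, v in enumerate(self.slots) if v is not None]
        # First occupied slot holding an element >= item, else one past the end.
        pos = next((i for i in occupied if self.slots[i] >= item),
                   occupied[-1] + 1 if occupied else 0)
        carry = item
        j = pos
        while carry is not None:         # shift the suffix right one slot
            if j >= len(self.slots):
                raise IndexError("array is full at this end")
            carry, self.slots[j] = self.slots[j], carry
            if carry is not None:
                self.moves += 1          # an existing element was relabeled
            j += 1

lab = NaiveListLabeling(capacity=8)
for x in [5, 1, 3, 2, 4]:
    lab.insert(x)
# lab.slots now holds 1..5 in slots 0..4; lab.moves counts the relabels.
```

    In the worst case (items arriving in descending order) this baseline moves Θ(n) elements per insert, which is exactly the cost that careful labeling schemes, and the prediction-augmented ones in the talk, aim to beat.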


    Bio:

    Shikha Singh is an Assistant Professor of Computer Science at Williams College. She obtained her PhD in Computer Science from Stony Brook University in 2018 and her Integrated MSc. in Mathematics and Computing from the Indian Institute of Technology Kharagpur in 2013. Shikha's research is in the area of algorithms, with a focus on cache-efficient and scalable data structures, as well as algorithmic game theory.



  • Training with Talk: A Machine Learning Makeover


    Speaker:

    Shashank Srivastava

    Date:2023-10-19
    Time:12:00:00 (IST)
    Venue: SIT- 001
    Abstract:

    Language and learning are deeply intertwined in humans. For example, in schools, we rely on processes such as reading books, listening to lectures, and engaging in student-teacher dialogs. In this talk, we will explore some recent work on building automated learning systems that can learn new tasks through natural language interactions with their users. We will cover multiple scenarios in this general direction: learning classifiers from language-based supervision, learning web-based tasks from explained demonstrations; and investigating when pretraining on language imparts effective inductive biases to large language models.


    Bio:

    Shashank Srivastava is an assistant professor in the Computer Science department at the University of North Carolina (UNC) Chapel Hill. Shashank received his PhD from the Machine Learning department at CMU in 2018, and was an AI Resident at Microsoft Research in 2018-19. Shashank's research interests lie in conversational AI, interactive machine learning and grounded language understanding. Shashank has an undergraduate degree in Computer Science from IIT Kanpur, and a Master's degree in Language Technologies from CMU. He received the Yahoo InMind Fellowship for 2016-17. His research has been covered by popular media outlets including GeekWire and New Scientist.

     



  • Socially aware Natural Language Processing


    Speaker:

    Snigdha Chaturvedi 

    Date:2023-10-18
    Time:12:00:00 (IST)
    Venue: SIT- 001
    Abstract:

    NLP systems have made tremendous progress in recent years but lack human-like language understanding. This is because there is a deep connection between language and people: most text is created by people and for people. Despite this strong connection, existing NLP systems remain incognizant of the social aspects of language. In this talk, I describe three ways of designing socially aware NLP systems. In the first part of the talk, I describe our socially aware approach to story generation, which incorporates the social relationships between the various people to be mentioned in the story. We use a latent-variable-based approach that generates the story by conditioning on the relationships to be exhibited in the story text using the latent variable. This latent-variable-based design results in a better and more explainable generation process. In the second part of the talk, I briefly describe our work on uncovering inherent social bias in automatically generated stories. We use a commonsense engine to reveal how such stories learn and amplify implicit social biases, especially gender biases. In the last part of the talk, I discuss methods to alleviate social biases. Specifically, I discuss debiasing text representations grounded in information theory. Using the rate-distortion function, we show how we can remove information about sensitive attributes such as race or gender from pre-trained text representations. This approach can successfully remove undesirable information while being robust to non-linear probing attacks.


    Bio:

    Snigdha Chaturvedi is an Assistant Professor of Computer Science at the University of North Carolina, Chapel Hill. She specializes in Natural Language Processing, emphasizing narrative-like and socially aware understanding, summarization, and generation of language. Previously, she was an Assistant Professor at UC-Santa Cruz, and a postdoctoral fellow at UIUC and UPenn working with Dan Roth. She earned her Ph.D. in Computer Science from UMD in 2016, where she was advised by Hal Daume III. Her research has been supported by NSF, Amazon, and IBM.



  • Securing Processors against Side-Channel Attacks: CPU Caches, Schedulers, and Beyond!


    Speaker:

    Prof. Gururaj Saileshwar

    Date:2023-10-13
    Time:11:00:00 (IST)
    Venue: SIT- 001
    Abstract:

    In recent years, micro-architectural side-channel attacks have emerged as a unique and potent threat to security and privacy. Identifying these side-channels is difficult, as they often originate from undocumented hardware structures that are hidden from the software. Moreover, their root cause lies in crucial hardware performance optimizations, making low-overhead mitigation challenging. This talk will focus on both the discovery of new attacks and new low-cost defenses.

     


    Bio:

    Gururaj Saileshwar is an Assistant Professor at the University of Toronto, Department of Computer Science. His research bridges computer architecture and systems security, with interests including micro-architectural side-channels, DRAM Rowhammer attacks, and trusted execution environments. His past work has received an IEEE HPCA Best Paper Award and an IEEE Micro Top Picks Honorable Mention, and his PhD dissertation has been recognized with an IEEE HOST Best PhD Dissertation Award and an IEEE TCCA / ACM SIGARCH Best Dissertation Award Honorable Mention. His work appears in computer architecture conferences such as ASPLOS, MICRO, HPCA, and ISCA, and security conferences such as USENIX Security, IEEE S&P, and CCS.



  • Computational modeling of neuronal dynamics


    Speaker:

    Parul Verma

    Date:2023-10-06
    Time:09:30:00 (IST)
    Venue:Online (MS Teams Link)
    Abstract:

    Understanding neuronal dynamics, and how they are affected in neurological disorders, is one of the key problems in neuroscience today. This talk will describe advances in theoretical and biophysically grounded tools to understand neuronal mechanisms, with a focus on the functional activity of the entire brain. Specifically, it will demonstrate a graph-based mathematical model that captures the spectral and spatial features of the brain’s functional activity. This modeling approach revealed biophysical alterations in Alzheimer’s disease, different stages of sleep, and spontaneous fluctuations in electrophysiological functional activity. Together, these results aim to highlight the importance of such modeling techniques in identifying the underlying biophysical mechanisms of neuronal dynamics, which can be intractable to infer using neuroimaging data alone.


    Bio:

    Parul Verma is a postdoc at the University of California San Francisco, Department of Radiology, since 2020. She obtained her Ph.D. at Purdue University. Before that, she obtained her B.Tech in Chemical Engineering from IIT Bombay. Parul's doctoral work has been recognized by a faculty lectureship award from Purdue Chemical Engineering, and her postdoctoral work has been awarded a fellowship by the Alzheimer's Association.

     






  • Deep Sensing: Jointly Optimizing Imaging and Processing


    Speaker:

    Dr. Sudhakar Kumawat

    Date:2023-10-03
    Time:11:00:00 (IST)
    Venue:#001, SIT Building
    Abstract:

    In this seminar, I will talk about the area of deep sensing where we jointly optimize the imaging (camera) parameters along with the deep learning models for novel computer vision applications. I will begin by discussing our recently published work "Action Recognition From a Single-Coded Image" where I will present our proposed framework for recognizing human actions directly from coded exposure images, without reconstructing the original scene. Next, I will talk about deep sensing in a broader context, discussing the motivation behind pursuing this research area, key ideas, and its application to existing and novel vision applications. Finally, I will briefly discuss how we are using deep sensing for a novel computer vision application called "Multimodal Material Segmentation in Road Scene Images".


    Bio:

    Sudhakar Kumawat is a post-doctoral fellow at the Institute of Datability, Osaka University, Japan. He received his PhD from IIT Gandhinagar under the supervision of Dr. Shanmuganathan Raman. He was a TCS research fellow during his PhD. Before that, he received his Integrated Dual Degree (B.Tech+M.Tech, 5 years) from the Computer Science and Engineering Department, IIT (BHU) Varanasi, in 2014. His broad area of research is computer vision, with a special interest in privacy-preserving computer vision, compressive sensing, and domain generalization. He has published papers in top computer vision journals and conferences such as TPAMI, ECCV, CVPR, and ICASSP. He received the best paper runner-up award at NCVPRIPG 2019.

     



  • The Story of AWS Glue (VLDB 2023)


    Speaker:

    Mohit Saxena

    https://www.amazon.science/publications/the-story-of-aws-glue

    Date:2023-09-27
    Time:12:00:00 (IST)
    Venue:SIT-001
    Abstract:

    AWS Glue is Amazon's serverless data integration cloud service that makes it simple and cost-effective to extract, clean, enrich, load, and organize data. Originally launched in August 2017, AWS Glue began as an extract-transform-load (ETL) service designed to relieve developers and data engineers of the undifferentiated heavy lifting needed to load databases and data warehouses and build data lakes on Amazon S3. Since then, it has evolved to serve a larger audience including ETL specialists and data scientists, and includes a broader suite of data integration capabilities. Today, hundreds of thousands of customers use AWS Glue every month.


    Bio:

    Mohit Saxena is a Senior Manager at Amazon Web Services. He leads the team that manages the serverless data integration service at Amazon globally. Earlier, he worked at IBM Research - Almaden and focused on database and storage systems. He completed his Ph.D. in Computer Sciences from the University of Wisconsin-Madison, his M.S. from Purdue University, and his B.Tech in Computer Science from the Indian Institute of Technology Delhi.



  • Bayesian spatiotemporal regression approaches for modelling and understanding the drivers of childhood vaccination outcomes


    Speaker:

    Sumeet Agarwal

     

    Online (MS Teams): https://teams.microsoft.com/l/meetup-join/19%3a859e0622905d4a7980e595706e31fa0d%40thread.tacv2/1694307532808?context=%7b%22Tid%22%3a%22624d5c4b-45c5-4122-8cd0-44f0f84e945d%22%2c%22Oid%22%3a%22d147ea6a-9288-4db2-9f47-d243d61e426a%22%7d

    Date:2023-09-12
    Time:12:00:00 (IST)
    Venue:#001, SIT Building
    Abstract:

    Incomplete immunisation coverage causes preventable illness and death in both developing and developed countries. Identification of factors that might modulate coverage could inform effective immunisation programmes and policies. We construct performance indicators to quantitatively approximate measures of the susceptibility of immunisation programmes to coverage losses, with an aim to identify correlations between trends in vaccine coverage and socioeconomic factors. We undertook a data-driven time-series analysis to examine trends in coverage of diphtheria, tetanus, and pertussis (DTP) vaccination across 190 countries over the past 30 years. We grouped countries into six world regions and used Gaussian process regression to forecast future coverage rates and provide a vaccine performance index: a summary measure of the strength of immunisation coverage in a country. Our vaccine performance index highlighted countries at risk of failing to achieve the global target of 90% coverage by 2015, and could aid policy makers' assessments of the strength and resilience of immunisation programmes. Subsequently, we have undertaken more localised analyses of vaccination coverage and confidence for India, including the development of novel latent-variable Bayesian hierarchical approaches for the inference of unobserved behavioural and social drivers of vaccination; we also discuss some outcomes from this ongoing work.
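    As an illustration of the forecasting step described above, here is a minimal, hypothetical sketch (not the authors' pipeline) of Gaussian process regression applied to a synthetic coverage time series, using a plain-NumPy RBF-kernel implementation; the data, kernel, and hyperparameters are all invented for illustration.

    ```python
    import numpy as np

    def gp_forecast(train_x, train_y, test_x, length_scale=5.0, noise=1.0):
        """GP regression with an RBF kernel: posterior mean and std at test_x."""
        def k(a, b):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
        K = k(train_x, train_x) + noise * np.eye(len(train_x))  # noisy train covariance
        K_s = k(train_x, test_x)                                # train-test covariance
        alpha = np.linalg.solve(K, train_y)
        mean = K_s.T @ alpha                                    # posterior mean forecast
        var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s), axis=0)
        return mean, np.sqrt(np.maximum(var, 0.0))

    # Synthetic DTP coverage (%) for one country, 1990-2013
    years = np.arange(1990.0, 2014.0)
    coverage = 70 + 20 * (1 - np.exp(-(years - 1990) / 10.0))

    mean, std = gp_forecast(years, coverage, np.array([2014.0, 2015.0]))
    # A vaccine-performance-style flag could then compare a lower credible
    # bound, e.g. mean - 2 * std, against the 90% coverage target.
    ```

    The posterior standard deviation is what lets such an index distinguish countries that are confidently on track from those whose forecasts are merely uncertain.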


    Bio:

    Sumeet Agarwal teaches in the areas of Electrical Engineering, Artificial Intelligence, and Cognitive Science at IIT Delhi. His research interests are focused around the use of machine learning and statistical modelling techniques to better understand the structure, function, and evolution of complex systems, in both the biological and the social sciences.

     



  • Architectural Insights for Robustness and Fairness in Machine Learning


    Speaker:

    Professor Upamanyu Madhow

    Date:2023-09-05
    Time:12:00:00 (IST)
    Venue:#501, Bharti Building
    Abstract:

    As data-driven machine-learnt algorithms become the technology of choice in an increasing array of applications, the research community recognizes the urgency of addressing shortcomings such as the lack of robustness (e.g., against adversarial examples and distribution shifts) and fairness (e.g., caused by bias in the training data).  In this talk, we present two architectural insights, each based on a shift of perspective from the state of the art.

    1) Software Architecture: We view the standard end-to-end paradigm for training DNNs, which does not provide explicit control over the features extracted by intermediate layers, as a fundamental bottleneck in the design of robust, interpretable DNNs. Motivated by ideas from communication theory (processing with matched filters) and neuroscience (neuronal competition), we propose adapting the training and inference framework for DNNs to provide more direct control over the shape of activations in intermediate layers. Preliminary results for the CIFAR-10 image database indicate significant gains in general-purpose robustness against noise and common corruptions, as well as against adversarial perturbations.  We hope these results motivate further theoretical and experimental investigations: variants of the ideas we propose apply, in principle, to any DNN architecture or training model (supervised, unsupervised, self-supervised, semi-supervised).

    2) Social Architecture: We view unfairness in DNNs resulting from data bias as a symptom of the unfairness and bias in the society from which the data is extracted.  In an approach that is complementary to existing research on enhancing fairness during training and inference, we propose a framework for sequential decision-making aimed at dynamically influencing long-term societal fairness via positive feedback.  We illustrate our ideas via a problem of selecting applicants from a pool consisting of two groups, one of which is under-represented, and hope that our results stimulate the collaboration between policymakers, social scientists and machine learning researchers required for real-world impact.


    Bio:

    Upamanyu Madhow is Distinguished Professor of Electrical and Computer Engineering at the University of California, Santa Barbara.  His current research interests focus on next generation communication, sensing and inference infrastructures, with emphasis on millimeter wave systems, and on fundamentals and applications of robust machine learning. Dr. Madhow is a recipient of the 1996 NSF CAREER award, co-recipient of the 2012 IEEE Marconi prize paper award in wireless communications, and recipient of a 2018 Distinguished Alumni award from the ECE Department at the University of Illinois, Urbana-Champaign. He is the author of two textbooks published by Cambridge University Press, Fundamentals of Digital Communication (2008) and Introduction to Communication Systems (2014).  Prof. Madhow is co-inventor on 32 US patents, and has been closely involved in technology transfer of his research through several start-up companies, including ShadowMaps, a software-only approach to GPS location improvement which was deployed worldwide by Uber.






  • Automated Decision Making for Safety Critical Applications


    Speaker:

    Prof. Mykel Kochenderfer (Stanford University)

    Date:2023-09-04
    Time:16:00:00 (IST)
    Venue:#501, Bharti Building
    Abstract:

    Building robust decision making systems for autonomous systems is challenging. Decisions must be made based on imperfect information about the environment and with uncertainty about how the environment will evolve. In addition, these systems must carefully balance safety with other considerations, such as operational efficiency. Typically, the space of edge cases is vast, placing a large burden on human designers to anticipate problem scenarios and develop ways to resolve them. This talk discusses major challenges associated with ensuring computational tractability and establishing trust that our systems will behave correctly when deployed in the real world. We will outline some methodologies for addressing these challenges and point to some research applications that can serve as inspiration for building safer systems.


    Bio:

    Mykel Kochenderfer is an Associate Professor of Aeronautics and Astronautics at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. Of particular interest are systems for air traffic control, unmanned aircraft, and automated driving where decisions must be made in uncertain, dynamic environments while maintaining safety and efficiency. Research at SISL focuses on efficient computational methods for deriving optimal decision strategies from high-dimensional, probabilistic problem representations. Prior to joining the faculty in 2013, he was at MIT Lincoln Laboratory where he worked on aircraft collision avoidance, leading to the creation of the ACAS X international standard for manned and unmanned aircraft. Prof. Kochenderfer is a co-director of the Center for AI Safety. He is an associate editor of the Journal of Artificial Intelligence Research and the Journal of Aerospace Information Systems. He is an author of the textbooks Decision Making under Uncertainty: Theory and Application (MIT Press, 2015), Algorithms for Optimization (MIT Press, 2019), and Algorithms for Decision Making (MIT Press, 2022).



  • Deep Sensing: Jointly Optimizing Imaging and Processing


    Speaker:

    Dr. Sudhakar Kumawat

    Date:2023-09-03
    Time:11:00:00 (IST)
    Venue: SIT- 001
    Abstract:

    In this seminar, I will talk about the area of deep sensing where we jointly optimize the imaging (camera) parameters along with the deep learning models for novel computer vision applications. I will begin by discussing our recently published work "Action Recognition From a Single-Coded Image" where I will present our proposed framework for recognizing human actions directly from coded exposure images, without reconstructing the original scene. Next, I will talk about deep sensing in a broader context, discussing the motivation behind pursuing this research area, key ideas, and its application to existing and novel vision applications. Finally, I will briefly discuss how we are using deep sensing for a novel computer vision application called "Multimodal Material Segmentation in Road Scene Images".


    Bio:

    Sudhakar Kumawat is a post-doctoral fellow at the Institute of Datability, Osaka University, Japan. He received his PhD from IIT Gandhinagar under the supervision of Dr. Shanmuganathan Raman. He was a TCS research fellow during PhD. Before that, he received his Integrated Dual Degree (B.Tech+M.Tech, 5 years) from the Computer Science and Engineering Department, IIT (BHU) Varanasi, in 2014. His broad area of research is computer vision, with a special interest in privacy-preserving computer vision,  compressive sensing, and domain generalization. He has published papers in top computer vision journals and conferences such as TPAMI, ECCV, CVPR, and ICASSP. He received the best paper runner-up award at NCVPRIPG 2019.



  • Gen-AI, its applications and end to end Development Life Cycle of AI solutions


    Speaker:

    Anupam Purwar

    Date:2023-09-01
    Time:16:00:00 (IST)
    Venue:Bharti 501
    Abstract:

    Generative AI in the natural language space is showing tremendous potential for automating various routine jobs. Recent studies have also demonstrated that generative AI can aid creative content creation. At the centre of this innovation are Large Language Models (LLMs); the leading ones include GPT-4, Claude 2, and Llama 2. Many of these LLMs are commercial, but there are open-source ones too, which can help organizations unlock tremendous value and innovate. In this talk, I will present a practical way to develop an end-to-end application using LLMs in a scalable and affordable manner. The talk will cover the software development life cycle for generative AI solutions, along with problem statement definition, to help budding AI engineers, AI researchers, and product managers alike.


    Bio:

    Anupam Purwar is a Senior Research Scientist at Amazon Development Centre India, Hyderabad, where he leads the development of data science and machine learning-based solutions for Amazon Global Fulfillment. Anupam holds an MBA in Finance and Information Systems from the Indian School of Business and a Bachelor of Engineering from the Birla Institute of Technology and Science, Pilani. He received merit scholarships and graduated in the top 5% of his class at both ISB and BITS Pilani. Prior to this, Anupam worked as a Research Scientist at the Indian Institute of Science (IISc), where he was part of a multi-institutional effort, including IITs and ISRO, to develop novel structures. He has authored 20+ peer-reviewed articles on machine learning, IoT, and computational design, with 200+ citations, and has received multiple best paper awards. He is a certified machine learning professional with 8+ certifications from Google and AWS.



  • Fusing AI and Formal Methods for Automated Synthesis


    Speaker:

    Priyanka Golia

    Date:2023-08-28
    Time:12:00:00 (IST)
    Venue:MS Teams
    Abstract:

    We entrust large parts of our daily lives to computer systems, which are becoming increasingly more complex. Developing scalable yet trustworthy techniques for designing and verifying such systems is an important problem. In this talk, our focus will be on automated synthesis,  a technique that uses formal specifications to automatically generate systems (such as functions, programs, or circuits) that provably satisfy the requirements of the specification.  I will introduce a state-of-the-art functional synthesis algorithm that leverages artificial intelligence to provide an initial guess for the system and then uses formal methods to repair and verify the guess to synthesize a system that is correct by construction. I will conclude by exploring the potential for combining AI and formal methods to address real-world scenarios.


    Bio:

    Priyanka Golia has completed her Ph.D. in the joint degree program of  NUS, Singapore and IIT Kanpur, India.  Her research interests lie at the intersection of formal methods and artificial intelligence. In particular, her dissertation work has focused on designing scalable automated synthesis and testing techniques.

    Her work has been awarded Best Paper Nomination at ICCAD-21 and Best Paper Candidate at DATE-23.  She was named one of the EECS Rising Stars in 2022. She has co-presented a tutorial on Automated Synthesis: Towards the Holy Grail of AI at AAAI-22 and IJCAI-22, and she is co-authoring an upcoming book (on invitation from NOW publishers) on functional synthesis.
     


  • "Fast Multivariate Multipoint Evaluation over Finite Fields"


    Speaker:

     Dr. Sumanta Ghosh (Caltech) Online Talk

    https://teams.microsoft.com/l/meetup-join/19%3ac00a05b5843f4486843ed7ca9c863eeb%40thread.tacv2/1692427565207?context=%7b%22Tid%22%3a%22624d5c4b-45c5-4122-8cd0-44f0f84e945d%22%2c%22Oid%22%3a%22870c4f19-6710-453d-a17d-f35f49b733e1%22%7d

    Date:2023-08-24
    Time:09:30:00 (IST)
    Abstract:

    Multivariate multipoint evaluation is the problem of evaluating a multivariate polynomial, given as a coefficient vector, simultaneously at multiple evaluation points. The straightforward algorithm for this problem is to iteratively evaluate the input polynomial at each input point. The question of obtaining faster-than-naive (ideally, linear time) algorithms for this problem is a natural and fundamental question in computational algebra. Besides, fast algorithms for this problem are closely related to fast algorithms for other natural algebraic questions like polynomial factorization and modular composition.
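    The straightforward baseline described above can be sketched in a few lines; this is an illustrative toy (representing a polynomial over a prime field as a dictionary from exponent tuples to coefficients), not the fast algorithms from the talk, whose whole point is to beat this point-by-point cost.

    ```python
    def evaluate(poly, point, p):
        """Evaluate a polynomial (dict: exponent tuple -> coefficient) at one point mod p."""
        total = 0
        for exps, coeff in poly.items():
            term = coeff
            for x, e in zip(point, exps):
                term = (term * pow(x, e, p)) % p
            total = (total + term) % p
        return total

    def naive_multipoint_eval(poly, points, p):
        """The straightforward algorithm: one independent evaluation per point."""
        return [evaluate(poly, pt, p) for pt in points]

    # f(x, y) = x^2*y + 3y + 1 over F_7, evaluated at two points
    f = {(2, 1): 1, (0, 1): 3, (0, 0): 1}
    print(naive_multipoint_eval(f, [(1, 1), (2, 3)], 7))  # -> [5, 1]
    ```

    The cost grows as (number of points) x (number of monomials), whereas the fast algorithms discussed in the talk aim for time nearly linear in the combined input size.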

    Nearly linear time algorithms have been known for the univariate instance of multipoint evaluation for close to five decades due to the work of Borodin and Moenck. However, fast algorithms for the multivariate version have been much harder to come by. In a significant improvement to the state of the art for this problem, Umans in 2008 and Kedlaya-Umans in 2011 gave nearly linear time algorithms for this problem over fields of small characteristic and over all finite fields respectively, provided that the number of variables m is at most d^{o(1)}, where the degree of the input polynomial in every variable is less than d. They also stated the question of designing fast algorithms for the large variable case as an open problem.

     In this talk, we present two new algorithms for this problem. The first one is a nearly linear time (algebraic) algorithm for not-too-large fields of small characteristics. For the large variable case, this is the first nearly linear time algorithm for this problem over any large enough field. The second gives a nearly linear time (non-algebraic) algorithm over all finite fields.

     The talk is based on joint work with Vishwas Bhargava, Zeyu Guo, Mrinal Kumar, Chandra Kanta Mohapatra, and Chris Umans.


    Bio:

    Dr. Sumanta Ghosh (Caltech)



  • Text image analysis from low resource dataset


    Speaker:

    Prof. Partha Pratim Roy, IIT Roorkee

    Date:2023-08-23
    Time:12:00:00 (IST)
    Venue:#501, Bharti building
    Abstract:

    Text image understanding has long been an active research area because of its complexity and the challenges posed by a wide variety of text shapes. For benchmarking such systems, datasets are a necessary and important resource to develop. Deep learning-based text image analytics tasks, such as detection and recognition, have shown impressive results under full supervision with completely labeled datasets. However, the creation of such datasets with a large volume of samples is a challenging and time-consuming task. This presentation will highlight a few solutions for effective text image analysis under scarce data annotation.

    It is observed that the performance of the generic scene text detection method drops significantly due to the partial annotation of training data which introduces unnecessary noise. We propose a text region refinement method that provides robustness against the partially annotated training data in scene text detection. This approach works as a two-tier scheme. In the first tier text-probable regions apart from ground-truth text are obtained by applying hybrid loss. Next, these text-probable regions generate pseudo-labels to refine annotated text regions in the second tier during training. The proposed method exhibits a significant improvement over the baseline and existing approaches for the partially annotated training data.

    Besides, recognition of textual images is sometimes a difficult task, as sufficient labeled data is not available for some unexplored scripts, especially Indic scripts. The design of deep neural network models makes it necessary to extend training datasets in order to introduce unseen variations. We propose an Adversarial Feature Deformation Module (AFDM) that learns ways to elastically warp extracted features in a scalable manner. The AFDM is inserted between intermediate layers and trained alternately with the original framework, boosting its capability to better learn highly informative features rather than trivial ones. We record results for varying sizes of training data and observe that our enhanced network generalizes much better in the low-data regime.


    Bio:

    Dr. Partha Pratim Roy (FIETE, SMIEEE) is presently working as an Associate Professor in the Department of Computer Science and Engineering, Indian Institute of Technology (IIT), Roorkee. He received his Masters in 2006 and Ph.D. in 2010 from Universitat Autonoma de Barcelona, Spain. He did postdoctoral stays in France and Canada from 2010 to 2013. Dr. Roy gathered industrial experience while working for about 4 years in TCS and Samsung. In Samsung, he was a part-leader of the Computer Vision research team. He is the recipient of the "Best Student Paper" awarded by the International Conference on Document Analysis and Recognition (ICDAR), 2009, Spain. He has published more than 200 research papers in various international journals, and conference proceedings. He has co-organized several international conferences and workshops, has been a member of the Program Committee of a number of international conferences, and acts as a reviewer for many journals in the field. His research interests include Pattern Recognition, Document Image Processing, Biometrics, and Human-Computer Interaction. He is presently serving as Associate Editor of ACM Transactions on Asian and Low-Resource Language Information Processing, Neurocomputing, IET Image Processing, IET Biometrics, IEICE Transactions on Information and Systems, and Springer Nature Computer Science.



  • Principled Reinforcement Learning to Model our Dynamic Environments


    Speaker:

    Prof Chandrajit Bajaj 

    Date:2023-08-11
    Time:12:00:00 (IST)
    Venue:#501, Bharti Building
    Abstract:

    Can computers be programmed to learn progressive approximations of the underlying dynamical processes of specific environments through interaction (i.e., spatio-temporal sensing)? We answer this in the affirmative, with the proviso that not all Hamiltonian models of environmental processes are learnable at optimal fidelity. Computers equipped with stable numerical solvers (some possibly simultaneously learnable) are at the mercy of the noise and uncertainty of the sensed environmental observations, but can nevertheless be programmed to stably train, cross-validate, and test stochastic PDE (partial differential equation) neural operators. The learning proceeds along optimally controlled pathways that satisfy a form of the Hamilton-Jacobi-Bellman equation. In this talk, I shall explain a framework for learning Hamiltonian models as a partially observable controlled Markov decision process (COMDP), based on Pontryagin's maximum principle. The COMDP learning trajectory operates on a constrained manifold that satisfies the conservation laws of the underlying physics, via an application of Noether's theorem. The COMDP includes learning dynamic stabilizing controls satisfying learned Lyapunov functions for error-bounded, convergent solutions, and additionally produces sparse approximations that avoid overfitting. The talk will show a few examples of such learned spatio-temporal models of dynamic environments, with various approximations of dynamic shape and function.

    This is joint work with my students Taemin Heo, Minh Nguyen, and Yi Wan.


    Bio:

    Chandrajit Bajaj, Department of Computer Science and Oden Institute, Center for Computational Visualization,University of Texas at Austin

    http://www.cs.utexas.edu/~bajaj

    bajaj@cs.utexas.edu, bajaj@oden.utexas.edu



  • A Quantum Revolution in Computation


    Speaker:

    Umesh Vazirani (UC Berkeley)

    Date:2023-07-31
    Time:03:30:00 (IST)
    Venue:#501, Bharti Building
    Abstract:

    We are well into the NISQ era of Noisy Intermediate Scale Quantum Computers. Four years on from Google's ‘quantum supremacy’ experiment, we have a deeper understanding of the nature of that experiment, the computing power of NISQ and novel techniques for benchmarking such computers and characterizing their error models. I will also describe how concepts from cryptography have provided novel and counter-intuitive ways of probing quantum systems and the prospects they hold for the next generation of quantum computers taking on the quantum supremacy challenge.


    Bio:

    Prof. Umesh V. Vazirani is the Roger A. Strauch Professor of Electrical Engineering and Computer Science at the University of California, Berkeley, and the director of the Berkeley Quantum Computation Center. His research interests lie primarily in quantum computing. Vazirani is one of the founders of the field of quantum computing. His 1993 paper with his student Ethan Bernstein on quantum complexity theory defined a model of quantum Turing machines which was amenable to complexity-based analysis. This paper also gave an algorithm for the quantum Fourier transform, which was then used by Peter Shor within a year in his celebrated quantum algorithm for factoring integers. With Charles Bennett, Ethan Bernstein, and Gilles Brassard, he showed that quantum computers cannot solve black-box search problems faster than O(√N) in the number of elements to be searched. This result shows that the Grover search algorithm is optimal. It also shows that quantum computers cannot solve NP-complete problems in polynomial time using only the certifier. He is also a co-author of a textbook on algorithms. Vazirani was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators and related problems (jointly with Satish Rao and Sanjeev Arora). In 2018, he was elected to the National Academy of Sciences.



  • Hardness of Testing Equivalence to Sparse Polynomials Under Shifts


    Speaker:

    Suryajith Chillara

    Date:2023-07-26
    Time:03:00:00 (IST)
    Venue:#001, SIT Building
    Abstract:

    We say that two given polynomials $f, g \in R[x_1, \ldots, x_n]$, over a ring $R$, are equivalent under shifts if there exists a vector $(a_1, \ldots, a_n) \in R^n$ such that $f(x_1+a_1, \ldots, x_n+a_n) = g(x_1, \ldots, x_n)$. This is a special variant of the polynomial projection problem in Algebraic Complexity Theory. That is, instead of being given two polynomials $f$ and $g$ as input as described above, we are given just a polynomial $f$ and a parameter $t$, and we study the more general problem of testing the equivalence of $f$ to any polynomial in $R[x_1, \ldots, x_n]$ that has at most $t$ monomials with non-zero coefficients.
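
    A toy, univariate, integer-shift version of the question makes the definition concrete (the actual problem is multivariate over a ring, and the point of the talk is its hardness; this brute force is only an illustration):

```python
# Given f, is there a shift a such that f(x + a) has at most t nonzero
# coefficients?  Polynomials are coefficient lists: coeffs[i] is the
# coefficient of x^i.

def shift(coeffs, a):
    """Coefficients of f(x + a), computed by Horner's rule in (x + a)."""
    result = []
    for c in reversed(coeffs):
        # result <- result * (x + a) + c
        nxt = [0] * (len(result) + 1)
        for i, r in enumerate(result):
            nxt[i] += a * r       # a * result
            nxt[i + 1] += r       # x * result
        nxt[0] += c
        result = nxt
    return result

def sparsity(coeffs):
    return sum(1 for c in coeffs if c != 0)

def sparse_under_shift(coeffs, t, search_range=20):
    """Return an integer shift a with sparsity(f(x + a)) <= t, or None."""
    for a in range(-search_range, search_range + 1):
        if sparsity(shift(coeffs, a)) <= t:
            return a
    return None

# f(x) = x^2 + 2x + 1 = (x + 1)^2 is 3-sparse, but f(x - 1) = x^2 is 1-sparse.
a = sparse_under_shift([1, 2, 1], t=1)
```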

    Grigoriev and Karpinski (FOCS 1990), Lakshman and Saunders (SIAM J. Computing, 1995), and Grigoriev and Lakshman (ISSAC 1995) studied the problem of testing equivalence of a given polynomial to some $t$-sparse polynomial, over the rational numbers, and gave exponential time algorithms. These exponential time algorithms have not been improved in the past two decades, and this is a major motivation behind our study of the hardness of this problem.

    We show that $SparseShift_R$ is at least as hard as checking whether a given system of polynomial equations over $R[x_1, \ldots, x_n]$ has a solution (Hilbert's Nullstellensatz). We also study gap versions of this problem and show NP-hardness for certain regimes of parameters.

    Our results, to some extent, shed light on why this problem has in general evaded efforts to provide efficient algorithms.

    Joint work with Coral Grichener (Google, Israel) and Amir Shpilka (Tel Aviv University).

    Link: https://drops.dagstuhl.de/opus/volltexte/2023/17674/


    Bio:

    Suryajith Chillara (https://suryajith.github.io/) IIIT Hyderabad



  • Some results in the Intersection of Game Theory and Logic


    Speaker:

    Ramit Das

    Date:2023-06-27
    Time:16:00:00 (IST)
    Venue:Bharti Bldg. #404 + Team Links
    Abstract:

    We shall address the issues of modelling or formalising game theoretic properties like Nash Equilibrium, Finite Improvement Property, Weak Acyclicity of various game forms in various kinds of logic. We shall investigate the expressive powers offered by each logic, the model checking theorems and also a completeness proof of a decidable logic variant. We hope that this investigation would have an impact on the formalisation of game theory and its allied areas like computational social choice theory.


    Bio:

    Ramit Das is a researcher in Theoretical Computer Science about to get his PhD from the Institute of Mathematical Sciences, Chennai. He is interested in understanding the nature of computation in its various forms. For his PhD, he trained in Mathematical Logic and applied it to aspects of strategic games. His academic interests lie in trying to build bridges in Game Theory, Logic and areas of Complexity Theory like Descriptive Complexity Theory.



  • Approximate Model Counting: Is SAT Oracle More Powerful than NP Oracle?


    Speaker:

    Gunjan Kumar

    Date:2023-06-15
    Time:11:00:00 (IST)
    Venue:Bharti Bldg. #404 + Team Links
    Abstract:

    Given a Boolean formula $\phi$ over $n$ variables, the problem of model counting is to compute the number of solutions of $\phi$. Model counting is a fundamental problem in computer science with wide-ranging applications in domains such as quantified information leakage, probabilistic reasoning, network reliability, neural network verification, and more. Owing to the #P-hardness of the problem, Stockmeyer initiated the study of the complexity of approximate counting and showed that $\log n$ calls to an NP oracle are necessary and sufficient to achieve tight guarantees.
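
    For concreteness, exact model counting in its naive brute-force form, which is exponential in the number of variables and is precisely what motivates approximate counting (a generic sketch, not the paper's algorithm):

```python
# Count satisfying assignments of a CNF formula by enumerating all 2^n
# assignments.  A formula is a list of clauses; a literal +i / -i means
# variable i is true / false.

from itertools import product

def count_models(clauses, n):
    count = 0
    for bits in product([False, True], repeat=n):
        # bits[i] is the value assigned to variable i+1
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# phi = (x1 OR x2) AND (NOT x1 OR x3), over 3 variables.
phi = [[1, 2], [-1, 3]]
models = count_models(phi, 3)
```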

    It is well known that an NP oracle does not fully capture the behavior of SAT solvers, as SAT solvers are also designed to provide satisfying assignments when a formula is satisfiable, without additional overhead. Accordingly, the notion of a SAT oracle has been proposed to capture the behavior of a SAT solver: given a Boolean formula, a SAT oracle returns a satisfying assignment if the formula is satisfiable, and returns unsatisfiable otherwise.

    The primary contribution of this work is to study the relative power of the NP oracle and SAT oracle in the context of approximate model counting. We develop a new methodology to achieve the main result: a SAT oracle is no more powerful than an NP oracle in the context of approximate model counting.
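
    The distinction between the two oracles can be made concrete: because a SAT oracle returns an assignment, one can count models naively by repeatedly blocking the returned model. Below, a brute-force search stands in for a real SAT solver, and the enumeration is purely illustrative (it is linear in the count, not the paper's method):

```python
# A SAT "oracle" that returns a satisfying assignment or None, and a
# counter that blocks each returned model with a new clause.

from itertools import product

def sat_oracle(clauses, n):
    """Return a satisfying assignment as a tuple of bools, or None."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

def count_via_sat_oracle(clauses, n):
    clauses = [list(c) for c in clauses]
    count = 0
    while (model := sat_oracle(clauses, n)) is not None:
        count += 1
        # Block this assignment: at least one variable must flip.
        clauses.append([-(i + 1) if v else (i + 1)
                        for i, v in enumerate(model)])
    return count

total = count_via_sat_oracle([[1, 2], [-1, 3]], 3)
```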

    (To appear in ICALP 2023. Joint work with Diptarka Chakraborty, Sourav Chakraborty, and Kuldeep S Meel).


    Bio:

    Gunjan Kumar did his B.Tech in Computer Science and Engineering from the Indian Institute of Technology, Guwahati. Thereafter, he pursued his MS and Ph.D. from Tata Institute of Fundamental Research, Mumbai, and currently, he is a postdoctoral researcher at the National University of Singapore. His broad area of interest is Algorithms and Complexity with a focus on sublinear algorithms.



  • Measuring and Improving the Internal Conceptual Representations of Deep Learning


    Speaker:

    Ramakrishna Vedantam

    Online (MS Teams): https://teams.microsoft.com/l/meetup-join/19%3ac00a05b5843f4486843ed7ca9c863eeb%40thread.tacv2/1685675828429?context=%7b%22Tid%22%3a%22624d5c4b-45c5-4122-8cd0-44f0f84e945d%22%2c%22Oid%22%3a%22d147ea6a-9288-4db2-9f47-d243d61e426a%22%7d

    Date:2023-06-08
    Time:12:00:00 (IST)
    Venue:#501, Bharti Building
    Abstract:

    Endowing machines with abstract, flexible conceptual representations and the ability to combine known concepts to make novel, "conceptual-leaps" is a long-standing goal of artificial intelligence (AI). In pursuit of this goal, I will discuss my works on the foundations of concept learning for deep learning models. In particular I will focus on: multimodal learning (to ground concept representations more precisely into the world), quantifying robustness (to assess if atomic concepts are learnt correctly) and machine reasoning (to combine known atomic concepts into novel, emergent ones). Finally, I will speculate on important research directions to pursue for realizing the promise of general, robust and human interpretable AI systems.


    Bio:

    Ramakrishna Vedantam received the Ph.D. Degree in Computer Science from the Georgia Institute of Technology in 2018, before joining Facebook AI Research (FAIR) in New York. Currently, he is a visiting researcher at the New York University (NYU) center for data science (CDS). Rama's current research interests are around the foundations of robustness, multimodal learning and reasoning with large-scale deep learning models. At Georgia Tech, Rama's Ph.D. research was supported by the highly competitive Google Ph.D. fellowship. During his Ph.D. Rama also spent time at Google Research in Mountain View, Facebook AI Research in Menlo Park, Microsoft Research in Cambridge, UK and Ecole Centrale in Paris working on various topics at the intersection of probabilistic deep learning, multimodal learning, and reasoning. Rama developed the CIDEr metric popularly used in the AI community for evaluating vision and language models, and has published his research at various top-tier AI/ML venues such as ICML, NeurIPS, ICLR, CVPR, ICCV and EMNLP.




  • Semi-nonparametric Demand Estimation in the Presence of Unobserved Factors


    Speaker:

    Ashwin Venkataraman 

    Date:2023-05-29
    Time:15:10:00 (IST)
    Venue:SIT-001
    Abstract:

    Discrete choice models are commonly used to model customer demand because of their ability to capture substitution patterns in customer choices.
    Demand predictions from these models are then used as inputs in key operational decisions for firms such as what collection of products to show to customers or what prices to charge for different products in order to maximize revenues. In many applications of discrete choice modeling, there exist unobserved factors (UFs) driving the consumer demand that are not included in the model. Ignoring such UFs when fitting the choice model can produce biased parameter estimates, leading to poor demand predictions and suboptimal decisions. At the same time, accounting for UFs during estimation is challenging since we typically have only partial or indirect information about them. Existing approaches such as the classical BLP estimator (Berry et al. 1995) make strong parametric assumptions to deal with this challenge, and therefore can suffer from model misspecification issues when the assumptions are not met in practice.

    In this talk, I'll present a novel semi-nonparametric estimator for dealing with UFs in the widely used mixture of logit choice model that does not impose any parametric assumptions on the mixing distribution or the underlying mechanism generating the UFs. We theoretically characterize the benefit of using our estimator over the BLP estimator, and leverage the alternating minimization framework to design an efficient algorithm that implements our proposed estimator. Using a simulation study, we demonstrate that our estimator is robust to different ground-truth settings, whereas the performance of the BLP estimator suffers significantly under model misspecification. Finally, using real-world grocery sales data, we show that accounting for product and store-level UFs can significantly improve the accuracy of predicting weekly demand at an individual product and store level, with an avg. 57% improvement across 12 product categories over a state-of-the-art benchmark that ignores UFs during estimation.
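
    As background on the model class, a minimal mixture-of-logit sketch: choice probabilities are a mixture, over a distribution of taste coefficients, of standard logit probabilities. The two-segment setup and all numbers below are illustrative assumptions, not from the paper:

```python
# Mixed (mixture-of-)logit choice probabilities: each customer segment
# has its own taste coefficient beta, and the market-level probability
# of choosing product j is a weighted average of per-segment softmaxes.

import math

def logit_probs(utilities):
    m = max(utilities)                      # stabilize the softmax
    exps = [math.exp(u - m) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

def mixed_logit_probs(features, tastes, weights):
    """Mixture over taste coefficients of per-segment logit probabilities."""
    probs = [0.0] * len(features)
    for beta, w in zip(tastes, weights):
        seg = logit_probs([beta * x for x in features])
        probs = [p + w * s for p, s in zip(probs, seg)]
    return probs

# Two segments: price-sensitive (beta = -2) and less so (beta = -0.5).
prices = [1.0, 2.0, 3.0]
probs = mixed_logit_probs(prices, tastes=[-2.0, -0.5], weights=[0.5, 0.5])
```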

    Joint work with Prof. Srikanth Jagabathula and Sandeep Chitla, both from NYU Stern School of Business.


    Bio:

    Ashwin Venkataraman is an assistant professor of operations management at the Naveen Jindal School of Management at the University of Texas at Dallas (UTD). His research interests lie at the intersection of machine learning, operations management, and marketing, with a focus on developing novel models and methodologies that can leverage the vast amounts of customer data that firms have access to nowadays. Prior to joining UTD, Ashwin received an MS and PhD in computer science from the Courant Institute of Mathematical Sciences at New York University, and his doctoral thesis won an honorable mention (joint-second place) in the 2019 INFORMS Dantzig Dissertation Award. Before joining graduate school, Ashwin completed a B.Tech in Computer Science and Engineering from IIT Delhi.



  • The Road Not Taken: Exploring Alias Analysis Based Optimizations Missed by the Compiler


    Speaker:

    Dr. Piyus Kedia

    Date:2023-04-21
    Time:03:00:00 (IST)
    Venue:#113, SIT Building
    Abstract:

    Alias analysis aims to answer whether two pointers can overlap at runtime. However, static alias analysis is imprecise. Because alias analysis feeds many compiler optimizations, including loop transformations, its imprecision can hurt program performance, especially in the presence of loops.
    In this talk, I'll present our tool, Scout, which can disambiguate two pointers at runtime using a single memory access. The key idea is to constrain allocation size and alignment during memory allocation to enable fast disambiguation checks. Our technique enables new opportunities for loop-invariant code motion, dead store elimination, loop vectorization, and load elimination in already optimized code. Our performance improvements are up to 51.11% for the Polybench benchmarks and up to 0.89% for the SPEC benchmarks.
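
    The size-and-alignment idea admits a toy model, with pointers as plain integers. This is a simplification for illustration, not Scout's actual implementation: if every allocation occupies one fixed-size, aligned block, a single mask-and-compare decides whether two pointers can overlap:

```python
# If every allocation is BLOCK bytes long and BLOCK-aligned, two pointers
# can refer to the same object only when they fall inside the same block,
# so a runtime disambiguation check is one mask and one comparison.

BLOCK = 256  # allocation size and alignment, a power of two (illustrative)

def may_alias(p, q):
    """Pointers (as integers) may alias iff they share an aligned block."""
    return (p & ~(BLOCK - 1)) == (q & ~(BLOCK - 1))

# Two pointers into the same 256-byte block:
same = may_alias(0x1000 + 8, 0x1000 + 200)
# Pointers in different blocks cannot alias:
diff = may_alias(0x1000, 0x2000)
```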


    Bio:

    Piyus Kedia is an assistant professor at IIIT Delhi. He received his Ph.D. from IIT Delhi. He works in the area of programming languages and systems security.



  • On The Membership Problem for Hypergeometric Sequences with Rational Parameters


    Speaker:

    Klara Nosan


    Date:2023-04-19
    Time:03:00:00 (IST)
    Venue:#001, SIT Building
    Abstract:

    We investigate the Membership Problem for hypergeometric sequences: given a hypergeometric sequence ⟨u_n⟩ of rational numbers and a rational value t, decide whether t occurs in the sequence. We show decidability of this problem under the assumption that in the defining recurrence f(n) u_{n+1} = g(n) u_n, the roots of the polynomials f and g are all rational numbers. We further show the problem remains decidable if the splitting fields of the polynomials f and g are distinct or if f and g are monic polynomials that both split over a quadratic number field.
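
    A concrete instance of the setup, with the recurrence polynomials having rational roots. The naive scan below only searches a finite prefix of the sequence; the talk's contribution is a genuine decision procedure, which this is not:

```python
# Hypergeometric sequence from f(n) u_{n+1} = g(n) u_n.  With
# f(n) = n + 1, g(n) = 1, and u_0 = 1 we get u_n = 1/n!.

from fractions import Fraction

def hypergeometric(u0, f, g, n_max):
    """Terms u_0 .. u_{n_max} of the recurrence f(n) u_{n+1} = g(n) u_n."""
    u = Fraction(u0)
    terms = [u]
    for n in range(n_max):
        u = u * Fraction(g(n), f(n))
        terms.append(u)
    return terms

terms = hypergeometric(1, f=lambda n: n + 1, g=lambda n: 1, n_max=10)
# Does the target t = 1/120 occur in the prefix?  Yes: u_5 = 1/5!.
found = Fraction(1, 120) in terms
```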

    Our proof relies on bounds on the density of primes in arithmetic progressions. We also observe a relationship between the decidability of the Membership problem (and variants) and the Rohrlich-Lang conjecture in transcendence theory.

    This talk is based on works done in collaboration with George Kenison, Amaury Pouly, Mahsa Shirmohammadi and James Worrell.


    Bio:

    https://www.irif.fr/~nosan/



  • Backdoor Attacks in Computer Vision: Challenges in Building Trustworthy Machine Learning Systems


    Speaker:

    Dr. Aniruddha Saha

    Date:2023-04-19
    Time:12:00:00 (IST)
    Venue:#001, SIT Building
    Abstract:

    Deep Neural Networks (DNNs) have become the standard building block in numerous machine learning applications. The widespread success of these networks has driven their deployment in sensitive domains like health care, finance, autonomous driving, and defense-related applications.


    However, DNNs are vulnerable to adversarial attacks. Research has shown that an adversary can tamper with the training process of a model by injecting misrepresentative data (poisons) into the training set. The manipulation is done in a way that the victim's model will malfunction only when a trigger modifies a test input. These are called backdoor attacks. For instance, a backdoored model in a self-driving car might work accurately for days before it suddenly fails to detect a pedestrian when the adversary decides to exploit the backdoor.


    In this talk, I will show ways in which state-of-the-art deep learning methods for computer vision are vulnerable to backdoor attacks and a few defense methods to remedy the vulnerabilities. Optimizing only for accuracy is not enough when we are developing machine learning systems for high stakes domains. Making machine learning systems trustworthy is our biggest challenge in the next few years.


    Bio:

    Aniruddha Saha is currently a Postdoctoral Associate with the Center for Machine Learning (CML) in the University of Maryland Institute for Advanced Computer Studies (UMIACS). He received his PhD in Computer Science from the University of Maryland, Baltimore County. His research interests include Computer Vision, Adversarial Robustness, Data Poisoning, Backdoor Attacks and Trustworthy Machine Learning.



  • Towards effective human-robot collaboration in shared autonomy systems


    Speaker:

    Raunak Bhattacharyya

    Date:2023-03-31
    Time:12:00:00 (IST)
    Venue:#501, Bharti Building
    Abstract:

    Automated agents have the potential to augment human capabilities in safety-critical applications such as driving, service and inspection, and smart manufacturing. As the field of robotics and AI is quickly emerging, one critical and challenging problem is ensuring that autonomous agents can collaborate and interact with humans. In this talk, I will present our work on how automated agents can model human decision making, plan around human operators, and explain their decisions. First, I will present an approach based on imitation learning to model real-world human behavior and demonstrate its application to model human driving trajectories. Second, I will present a hybrid data-driven and rule-based approach to generate novel scenarios which can be used for planning. Third, I will present ongoing work on explainable automated agents. Finally, I will discuss my future research plan centered around shared autonomous systems which includes optimally allocating authority between automated and human controllers, learning from imperfect demonstrations, metacognition for human-robot collaboration, and safe autonomous planning in the presence of humans.


    Bio:

    Dr. Raunak Bhattacharyya is a Postdoctoral Research Associate with the Oxford Robotics Institute, University of Oxford. His research focuses on human-autonomy interaction in shared autonomy systems. Raunak completed his Ph.D. at Stanford University, where he was a doctoral researcher in the Stanford Intelligent Systems Lab. He earned two Master's Degrees, in Computer Science and in Aerospace Engineering from Georgia Tech, and an undergraduate degree in Aerospace Engineering from IIT Bombay. Raunak received the Postdoctoral Enrichment Award from the Alan Turing Institute, UK, and the Graduate Research Award from the Transportation Research Board, USA.


    Online Link (MS Teams): https://teams.microsoft.com/l/meetup-join/19%3a859e0622905d4a7980e595706e31fa0d%40thread.tacv2/1679902599494?context=%7b%22Tid%22%3a%22624d5c4b-45c5-4122-8cd0-44f0f84e945d%22%2c%22Oid%22%3a%22d147ea6a-9288-4db2-9f47-d243d61e426a%22%7d



  • Security for the Internet of Things: Challenges and Prospects


    Speaker:

    Dr. Shantanu Pal, Assistant Professor, School of Information Technology, Deakin University, Melbourne, Australia

    Date:2023-03-29
    Time:12:00:00 (IST)
    Venue:Online (MS Teams Link)
    Abstract:

    This talk reports on security mechanisms for large-scale Internet of Things (IoT) systems, in particular the need for access control, identity management, delegation of access rights, and the provision of trust within such systems. The talk will discuss the design and development of an access control architecture for the IoT, covering in detail how a policy-based approach provides fine-grained access for authorized users to services while protecting valuable resources from unauthorized access. The talk will also explore an identity-less, asynchronous and decentralized delegation model for the IoT that leverages blockchain technology. This further calls for better design of IoT infrastructures, optimized human engagement, IoT identity management, and lightweight access control solutions in the broad context of the IoT, creating fertile ground for research and innovation. The talk will also discuss various challenges, including the propagation of uncertainty in IoT networks, and the prospects of IoT access control mechanisms using emerging technologies such as blockchain.


    Bio:

    Dr. Shantanu Pal is an Assistant Professor in the School of Information Technology, Deakin University, Melbourne, Australia. Shantanu holds a PhD in Computer Science from Macquarie University, Sydney, Australia. He was a Research Fellow at the Queensland University of Technology (QUT), Brisbane, Australia. He was also an associate researcher working with CSIRO's Data61, Australia. Shantanu's research interests are the Internet of Things (IoT), access control, blockchain technology, big data and distributed applications for cyber-physical systems, mobile and cloud computing, uncertainty propagation in IoT networks, emerging technologies, e.g., machine learning and artificial intelligence, etc. Shantanu is listed in the world's top 2% of scientists according to the recently released list by Stanford University, USA, in 2022 in Computer Networking and Communications.



  • Repeatedly Matching Items to Agents Fairly and Efficiently


    Speaker:

    Shivika Narang (PhD student at IISc)

    Date:2023-01-27
    Time:16:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    We consider a novel setting where a set of items are matched to the same set of agents repeatedly over multiple rounds. Each agent gets exactly one item per round, which brings interesting challenges to finding efficient and/or fair repeated matchings. A particular feature of our model is that the value of an agent for an item in some round depends on the number of rounds in which the item has been used by the agent in the past. We present a set of positive and negative results about the efficiency and fairness of repeated matchings. For example, when items are goods, a variation of the well-studied fairness notion of envy-freeness up to one good (EF1) can be satisfied under certain conditions. Furthermore, it is intractable to achieve fairness and (approximate) efficiency simultaneously, even though they are achievable separately. For mixed items, which can be goods for some agents and chores for others, we propose and study a new notion of fairness that we call swap envy-freeness (swapEF).
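
    For reference, the base single-round EF1 condition the abstract builds on can be checked directly. This is a generic sketch of the standard definition; the repeated-matching variants studied in the paper are more delicate:

```python
# EF1 for goods: agent i is envy-free up to one good toward j if removing
# some single item from j's bundle eliminates i's envy.

def ef1(valuations, bundles):
    """valuations[i][g]: agent i's value for good g; bundles[i]: i's goods."""
    for i, vi in enumerate(valuations):
        my_value = sum(vi[g] for g in bundles[i])
        for j, other in enumerate(bundles):
            if i == j or not other:
                continue
            other_value = sum(vi[g] for g in other)
            # Envy must vanish after dropping j's single best good (for i).
            if my_value < other_value - max(vi[g] for g in other):
                return False
    return True

# Two agents, three goods; agent 0 gets {0}, agent 1 gets {1, 2}.
vals = [[5, 4, 4], [1, 3, 3]]
fair = ef1(vals, [[0], [1, 2]])
```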

    Joint work with Prof Ioannis Caragiannis.
    https://arxiv.org/abs/2207.01589



    Bio:

    Shivika Narang is a PhD student, and recipient of the Tata Consultancy Services (TCS) Research Scholarship at the Indian Institute of Science, Bengaluru, where she is a member of the Game Theory Lab. She is being advised by Prof Y Narahari. She is broadly interested in Algorithmic Game Theory and Approximation Algorithms. Her current work is focused on fairness in matchings and allocations.



  • Fusing AI and Formal Methods for Automated Synthesis.


    Speaker:

    Priyanka Golia  (IITK & NUS)

    Date:2023-01-17
    Time:15:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    We entrust large parts of our daily lives to computer systems, which are becoming increasingly complex. Developing scalable yet trustworthy techniques for designing and verifying such systems is an important problem. In this talk, our focus will be on automated synthesis, a technique that uses formal specifications to automatically generate systems (such as functions, programs, or circuits) that provably satisfy the requirements of the specification. I will introduce a state-of-the-art synthesis algorithm that leverages artificial intelligence to provide an initial guess for the system, and then uses formal methods to repair and verify that guess in order to synthesize a provably correct system. I will conclude by exploring the potential for combining AI and formal methods to address real-world scenarios.
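
    The guess-then-repair loop described above can be sketched in the style of counterexample-guided synthesis, with a trivially enumerative guesser standing in for the learned model and exhaustive checking standing in for a verifier (illustrative only, not the talk's algorithm):

```python
# Propose a candidate, search for a counterexample, and use accumulated
# counterexamples to filter (repair) future candidates.

def synthesize(spec, candidates, domain):
    """Find f among candidates with spec(x, f(x)) for all x in domain."""
    counterexamples = []
    for f in candidates:
        # Repair step: a candidate must first explain past counterexamples.
        if any(not spec(x, f(x)) for x in counterexamples):
            continue
        # Verify: search for a fresh counterexample.
        cex = next((x for x in domain if not spec(x, f(x))), None)
        if cex is None:
            return f
        counterexamples.append(cex)
    return None

# Specification: output must be double the input (on 0..9).
spec = lambda x, y: y == 2 * x
candidates = [lambda x: x, lambda x: x + 2, lambda x: 2 * x]
f = synthesize(spec, candidates, domain=range(10))
```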


    Bio:

    Priyanka Golia is a final year Ph.D. candidate at NUS, Singapore and IIT Kanpur. Her research interests lie at the intersection of formal methods and artificial intelligence. In particular, her dissertation work has focused on designing scalable automated synthesis and testing techniques. Her work received a Best Paper Nomination at ICCAD-21 and was a Best Paper Candidate at DATE-23. She was named one of the EECS Rising Stars in 2022. She has co-presented a tutorial on Automated Synthesis: Towards the Holy Grail of AI at AAAI-22 and IJCAI-22, and she is co-authoring an upcoming book (on invitation from NOW Publishers) on functional synthesis.



  • Selection in the Presence of Biases


    Speaker:

    Prof. Nisheeth Vishnoi (Yale University)

    Date:2023-01-09
    Time:15:00:00 (IST)
    Venue:SIT-001
    Abstract:

    In selection processes such as hiring, promotion, and college
    admissions, biases toward socially-salient attributes of candidates
    are known to produce persistent inequality and reduce aggregate
    utility for the decision-maker. Interventions such as the Rooney Rule
    and its generalizations, which require the decision maker to select at
    least a specified number of individuals from each affected group, have
    been proposed to mitigate the adverse effects of such biases in
    selection.

    In this talk, I will discuss recent works which have established that
    such lower-bound constraints can be effective in improving aggregate
    utility.
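
    A small simulation of the phenomenon (the bias model and all numbers below are illustrative assumptions, not from the papers): when one group's observed utility is multiplicatively biased down, unconstrained top-k selection by observed scores loses true utility, while a Rooney-Rule-style lower-bound constraint recovers some of it:

```python
# Candidates are (true utility, observed utility, group).  Group B's
# observed utility is scaled down by a bias factor.

def select_top_k(cands, k, key):
    return sorted(cands, key=key, reverse=True)[:k]

def true_utility(selected):
    return sum(u for u, _, _ in selected)

bias = 0.5
cands = ([(u, u, "A") for u in [10, 9, 8, 7]] +
         [(u, u * bias, "B") for u in [12, 11, 6, 5]])

k = 4
unconstrained = select_top_k(cands, k, key=lambda c: c[1])
# Constrained: at least 2 from group B, rest best of the remainder.
b = select_top_k([c for c in cands if c[2] == "B"], 2, key=lambda c: c[1])
rest = select_top_k([c for c in cands if c not in b], k - 2,
                    key=lambda c: c[1])
constrained = b + rest
```

Here the unconstrained rule selects only group A (true utility 34), while the constrained rule includes the two strongest B candidates and attains higher true utility (42).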

    Papers:
    https://arxiv.org/abs/2001.08767
    https://arxiv.org/abs/2010.10992
    https://arxiv.org/abs/2202.01661


    Bio:

    Nisheeth Vishnoi is a professor of computer science and a co-founder of the Computation and Society Initiative at Yale University. His research focuses on foundational problems in computer science, machine learning, and optimization. He is also broadly interested in understanding and addressing some of the key questions that arise in nature and society from a computational viewpoint. Here, his current focus is on physics-inspired algorithms and algorithmic fairness. He is the author of two monographs and the book Algorithms for Convex Optimization.

    He was the recipient of the Best Paper Award at IEEE FOCS in 2005, the IBM Research Pat Goldberg Memorial Award in 2006, the Indian National Science Academy Young Scientist Award in 2011, the IIT Bombay Young Alumni Achievers Award in 2016, and the Best Technical Paper award at ACM FAT* in 2019. He was elected an ACM Fellow in 2019.



  • Towards Next-Generation ML/AI: Robustness, Optimization, Privacy.


    Speaker:

    Krishna Pillutla, visiting researcher (postdoc) at Google Research, USA

    Date:2023-01-06
    Time:12:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    Two trends have taken hold in machine learning and artificial intelligence: a move to massive, general-purpose, pre-trained models and a move to small, on-device models trained on distributed data. Both these disparate settings face some common challenges: a need for (a) robustness to deployment conditions that differ from training, (b) faster optimization, and (c) protection of data privacy.

    As a result of the former trend, large language models have displayed emergent capabilities they have not been trained for. Recent models such as GPT-3 have attained the ability to generate remarkably human-like long-form text. I will describe Mauve, a measure to quantify the goodness of this emergent capability. It measures the gap between the distribution of generated text and that of human-written text. Experimentally, Mauve correlates most strongly with human evaluations of generated text and can quantify a number of its qualitative properties.
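
    Mauve itself is built from divergence frontiers over quantized embeddings; as a much cruder stand-in for "measuring the gap between two text distributions", the sketch below compares smoothed token-frequency distributions with KL divergence (illustrative only, not Mauve):

```python
# Compare a "model" text distribution against a "human" one via KL
# divergence on add-one-smoothed token frequencies.

import math
from collections import Counter

def token_dist(texts, vocab):
    counts = Counter(tok for t in texts for tok in t.split())
    total = sum(counts[w] + 1 for w in vocab)        # add-one smoothing
    return {w: (counts[w] + 1) / total for w in vocab}

def kl(p, q):
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

human = ["the cat sat", "the dog ran"]
model_close = ["the cat ran", "the dog sat"]   # same token frequencies
model_far = ["blorp blorp blorp", "zzz zzz"]   # very different tokens

vocab = {w for t in human + model_close + model_far for w in t.split()}
p = token_dist(human, vocab)
gap_close = kl(p, token_dist(model_close, vocab))
gap_far = kl(p, token_dist(model_far, vocab))
```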

    The move to massively distributed on-device federated learning of models opens up new challenges due to the natural diversity of the underlying user data and the need to protect its privacy. I will discuss how to reframe the learning problem to make the model robust to natural distribution shifts arising from deployment on diverse users who do not conform to the population trends. I will describe a distributed optimization algorithm and show how to implement it with end-to-end differential privacy.

    To conclude, I will discuss my ongoing efforts and future plans to work toward the next generation of ML/AI by combining the best of both worlds with applications to differentially private language models and text generation to decentralized learning.


    Bio:

    Krishna Pillutla is a visiting researcher (postdoc) at Google Research, USA in the Federated Learning team. He obtained his Ph.D. at the University of Washington where he was advised by Zaid Harchaoui and Sham Kakade. Before that, he received his M.S. from Carnegie Mellon University and B.Tech. from IIT Bombay where he was advised by Nina Balcan and J. Saketha Nath respectively. Krishna's research has been recognized by a NeurIPS outstanding paper award (2021) and a JP Morgan Ph.D. fellowship (2019-20).




2022 talks

  • The present developments and future challenges of near-eye displays for augmented and virtual reality


    Speaker:

    Dr. Praneeth Chakravarthula, Research Scholar at Princeton University

    Date:2022-11-25
    Time:11:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    Holography and eye tracking have become the holy grail of display technology with their combined promise of offering unprecedented capabilities in near-eye displays. In this talk, I will discuss the developments in holography and eye tracking, and the challenges to overcome before such near-eye displays can be practical.


    Bio:

    Praneeth is a research scholar at Princeton University working on end-to-end human and machine perception, with a special focus on computational cameras and displays. His research interests lie at the intersection of optics, perception, graphics, optimization, and machine learning. Praneeth obtained his Ph.D. from UNC Chapel Hill, under the advice of Prof. Henry Fuchs, on everyday-use eyeglasses style near-eye displays for virtual and augmented reality. His research has won several awards including recent best paper awards at SIGGRAPH 2022 and ISMAR 2022.



  • On Creative Visual Communication


    Speaker:

    Prof. Karan Singh from the University of Toronto

    Date:2022-11-14
    Time:11:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    This talk describes the fundamental human need for creative visual communication and discusses research on techniques that lie at the intersection of art, mathematics, interaction and AI. The talk will cover open challenges in the expression of shape and motion from sketch and audio-visual input, and its perception both in 2D and AR/VR, highlighting various interfaces such as cassie, crossshade+true2form, www.ilovesketch.com, www.meshmixer.com www.flatfab.com, colorsandbox.com, janusxr.org and jaliresearch.com that are creative solutions to problems in sketch and color modeling, AR/VR creation and facial animation.


    Bio:

    Karan Singh is a Professor in Computer Science (since 2002) at the University of Toronto. He holds several CS degrees: a B.Tech. (1991) from the Indian Institute of Technology Madras, and an MS (1992) and a PhD (1995) from the Ohio State University. His research interests lie in interactive graphics, spanning geometric and anatomic modeling, visual perception, character and facial animation, sketch/touch based interfaces and interaction techniques for Augmented and Virtual Reality (AR/VR). He has been a development lead on the technical Oscar (2003) winning modeling and animation system Maya, and co-founded multiple companies including sketch2 (acquired by MRI Software 2021), JanusXR, and JALI. He supervised the design and research of critically acclaimed research systems ILoveSketch, MeshMixer (acquired by Autodesk in 2011), Neobarok and FlatFab. Karan co-directs a globally reputed graphics and HCI lab, DGP, has over 150 peer-reviewed publications, and has supervised over 50 MS/PhD/postdoctoral students. He was the R&D Director for the 2004 Oscar winning animated short film Ryan and has had an exhibition of electronic art titled Labyrinths. His research on audio-driven facial animation, JALI, was used to animate speech for all characters in the AAA game Cyberpunk 2077, and his recent research on anatomically animated faces, Animatomy, will be showcased in the upcoming film Avatar: The Way of Water.



  • Towards Autonomous Driving in Dense, Heterogeneous, and Unstructured Environments


    Speaker:

    Dr. Rohan Chandra, postdoctoral researcher at the University of Texas at Austin

    Date:2022-11-09
    Time:12:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    In this talk, I discuss key problems in autonomous driving towards handling dense, heterogeneous, and unstructured traffic environments. Autonomous vehicles (AVs) at present are restricted to operating on smooth and well-marked roads, in sparse traffic, and among well-behaved drivers. I present new techniques to perceive, predict, and navigate among human drivers in traffic that is significantly denser in terms of the number of traffic agents, more heterogeneous in terms of the size and dynamic constraints of those agents, and where many drivers may not follow traffic rules and exhibit varying behaviors. My talk is structured along three themes: perception, driver behavior modeling, and planning. More specifically, I will talk about:

    1. Improved tracking and trajectory prediction algorithms for dense and heterogeneous traffic using a combination of computer vision and deep learning techniques.
    2. A novel behavior modeling approach using graph theory for characterizing human drivers as aggressive or conservative from their trajectories.
    3. Behavior-driven planning and navigation algorithms in mixed and unstructured traffic environments using game theory and risk-aware planning.

    Finally, I will conclude by discussing the future implications and broader applications of these ideas in the context of social robotics where robots are deployed in warehouses, restaurants, hospitals, and inside homes to assist human beings.


    Bio:

    Rohan Chandra is currently a postdoctoral researcher at the University of Texas, Austin, hosted by Dr. Joydeep Biswas. Rohan obtained his B.Tech from the Delhi Technological University, New Delhi in 2016 and completed his MS and PhD in 2018 and 2022 from the University of Maryland advised by Dr. Dinesh Manocha. His doctoral thesis focused on autonomous driving in dense, heterogeneous, and unstructured traffic environments. He is a UMD’20 Future Faculty Fellow, RSS’22 Pioneer, and a recipient of a UMD’20 summer research fellowship. He has published his work in top computer vision and robotics conferences (CVPR, ICRA, IROS) and has interned at NVIDIA in the autonomous driving team. He has served on the program committee of leading conferences in robotics, computer vision, artificial intelligence, and machine learning. He has given invited talks at academic seminars and workshops and has served on robotics panels alongside distinguished faculty. Outside of research, Rohan enjoys playing board games, teaching, and mentoring younger students. Webpage: http://rohanchandra30.github.io/.



  • Blending PDEs with Machine Learning for Mechanics


    Speaker:

    Dr. Somdatta Goswami, Assistant Professor of Research, Division of Applied Mathematics, Brown University

    Date:2022-11-03
    Time:17:00:00 (IST)
    Venue:Teams
    Abstract:

    A new paradigm in scientific research has been established with the integration of data-driven and physics-informed methodologies in deep learning, and it is certain to have an impact on all areas of science and engineering. This field, popularly termed scientific machine learning, relies on a known model, some (or no) high-fidelity data, and partially known constitutive relationships or closures to close the gap between physical models and observational data. Although these strategies have been effective in many fields, they still face significant obstacles, such as the need for accurate knowledge transfer in data-restricted environments, and the investigation of data-driven methodologies in the century-old field of mechanics is still in its infancy. The major focus of this talk will be the application of deep learning techniques, within the context of functional and operator regression, to solve PDEs in mechanics. The approaches' effectiveness will be assessed by their extrapolation ability, accuracy, and computational efficiency in big- and small-data regimes, including transfer learning.


    Bio:

    Somdatta Goswami is an Assistant Professor of Research in the Division of Applied Mathematics at Brown University. Her research is focused on the development of efficient scientific machine-learning algorithms for high-dimensional physics-based systems in the fields of computational mechanics and biomechanics. She joined Brown University as a postdoctoral research associate under the supervision of Prof. George Karniadakis in January 2021. Prior to that, she completed her Ph.D. at Bauhaus University in Germany under Prof. Timon Rabczuk's guidance. During this time, she worked on developing techniques to overcome the limitations of conventional numerical methods for phase field-based modeling of fracture with isogeometric analysis and machine learning approaches.



  • Model Counting meets Distinct Elements


    Speaker:

    Dr. Kuldeep Meel, NUS Presidential Young Professor, National University of Singapore

    Date:2022-11-01
    Time:16:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    Constraint satisfaction problems (CSP) and data stream models are two powerful abstractions to capture a wide variety of problems arising in different domains of computer science. Developments in the two communities have mostly occurred independently and with little interaction between them. In this work, we seek to investigate whether bridging the seeming communication gap between the two communities may pave the way to richer fundamental insights.

    In this talk, I will describe how our investigations lead us to observe striking similarity in the core techniques employed in the algorithmic frameworks that have evolved separately for model counting and Distinct Elements computation. We design a simple recipe for translation of algorithms developed for Distinct Elements to that of model counting, resulting in new algorithms for model counting. We then observe that algorithms in the context of distributed streaming can be transformed to distributed algorithms for model counting. We next turn our attention to viewing streaming from the lens of counting and show that framing Distinct Elements estimation as a special case of #DNF counting allows us to obtain a general recipe for a rich class of streaming problems, which had been subjected to case-specific analysis in prior works.

    (Joint work with A. Pavan, N. V. Vinodchandran, and A. Bhattacharyya. The paper appeared at PODS 2021, was invited to ACM TODS as "Best of PODS 2021", and received the 2022 ACM SIGMOD Research Highlight Award and a CACM Research Highlight.)
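    The shared core technique behind Distinct Elements estimation and hashing-based model counting can be illustrated with a small sketch. The following is my own illustration, not the paper's construction: a K-Minimum-Values estimator that hashes each stream item into [0, 1) and keeps only the k smallest distinct hash values, the same "hash, then reason about a small window of the hash space" principle that approximate model counters exploit.

```python
import hashlib

def kmv_estimate(stream, k=256):
    """K-Minimum-Values sketch: hash each item into [0, 1), keep the k
    smallest distinct hash values, and estimate the number of distinct
    elements as (k - 1) / (k-th smallest hash)."""
    smallest = set()
    for item in stream:
        h = int(hashlib.sha256(str(item).encode()).hexdigest(), 16) / 2.0**256
        smallest.add(h)              # duplicates hash identically: no effect
        if len(smallest) > k:
            smallest.remove(max(smallest))
    if len(smallest) < k:            # fewer than k distinct items: exact count
        return len(smallest)
    return int((k - 1) / max(smallest))
```

    On a stream with fewer than k distinct items the sketch degenerates to exact counting; on large streams it uses O(k) memory regardless of stream length, at the cost of an approximate answer.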


    Bio:

    Kuldeep Meel holds the NUS Presidential Young Professorship in the School of Computing at the National University of Singapore (NUS). His research interests lie at the intersection of Formal Methods and Artificial Intelligence. He is a recipient of the 2022 ACP Early Career Researcher Award, the 2019 NRF Fellowship for AI and was named AI's 10 to Watch by IEEE Intelligent Systems in 2020. His research program's recent recognitions include the 2022 CACM Research Highlight Award, 2022 ACM SIGMOD Research Highlight, IJCAI-22 Early Career Spotlight, 2021 Amazon Research Award, "Best of PODS-21" invite from ACM TODS, "Best Papers of CAV-20" invite from FMSD journal, IJCAI-19 Sister conferences best paper award track invitation. Before joining NUS in 2017, he received M.S. and Ph.D. from Rice University, co-advised by Supratik Chakraborty and Moshe Y. Vardi. His thesis work received the 2018 Ralph Budd Award for Best Ph.D. Thesis in Engineering and the 2014 Outstanding Masters Thesis Award from Vienna Center of Logic and Algorithms, IBM Ph.D. Fellowship, and Best Student Paper Award at CP 2015. He graduated with Bachelor of Technology (with honors) in Computer Science and Engineering from IIT Bombay.



  • Designing for Local Knowledge Towards Inclusive Development


    Speaker:
    Dr. Deepika Yadav
    Date:2022-10-26
    Time:12:00:00 (IST)
    Venue:Teams
    Abstract:
    Limited opportunities to learn, share, and develop shared understandings of health are key impediments to the sustainable and equitable growth of society. Decentring the process of knowledge construction to include local, domestic, cultural, and shared knowledge, complementing expert-, clinical-, and individual-oriented ways of knowing, can help progress holistically towards the sustainable development goals, particularly in areas that have lagged, such as women's and child health, reproductive health, and gender inequities. Examining these problems from a Human-Computer Interaction perspective, I will present my research on designing and developing digital technologies for community and intimate health.
    I will first discuss findings from my research on improving learning opportunities for ASHAs, lay village women working as community health workers in under-served regions of India. Taking an Action Research approach, my research responds to the training challenges and constraints under which ASHAs operate in rural India by involving healthcare workers and NGOs in the design and production of a novel interaction and communication channel that successfully supported the training of ASHAs. I proposed a low-cost Interactive Voice Response-based mobile training tool and, through different field studies, demonstrated the feasibility and acceptability of alternative peer-to-peer learning methods mediated through mobile technology, including training 500+ ASHAs in the Haryana and Delhi regions. Through my studies, I document the potential of the proposed tool to provide training and address queries that are culturally rooted and otherwise overlooked, with impacts on child and maternal health nationwide. I will then discuss my current research on intimate care, which explores interaction design mechanisms to support self-discovery-based knowledge development and address interpersonal tensions in using digital intimate technologies in domestic and workplace settings.
    

    Bio:
    Deepika is a Digital Futures postdoctoral fellow at Stockholm University and the KTH Royal Institute of Technology. Her research interests include human-computer interaction, information and communication technologies for development, and computer-supported cooperative work. Her work involves designing and developing digital technologies for community health, with a focus on women's health and intimate health challenges in low-resource and shared contexts. She obtained her Ph.D. from IIIT Delhi in 2021. She is the winner of a Grand Challenges Explorations Award from the Bill & Melinda Gates Foundation, which supported her Ph.D. research on community health workers in India.
    


  • Functional Synthesis - An Ideal Meeting Ground for Formal Methods and Machine Learning


    Speaker:

    Dr. Kuldeep Meel, NUS

    Date:2022-10-20
    Time:16:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    Don't we all dream of the perfect assistant whom we can simply tell what to do, leaving the assistant to figure out how to accomplish the task? Formally, given a specification F(X, Y) over a set of input variables X and output variables Y, we want the assistant, aka the functional synthesis engine, to design a function G such that F(X, G(X)) is true. Functional synthesis has been studied for over 150 years, dating back to Boole in the 1850s, and yet scalability remains a core challenge. Motivated by progress in machine learning, we designed a new algorithmic framework, Manthan, which views functional synthesis as a classification problem, relying on advances in constrained sampling for data generation and on advances in automated reasoning for a novel proof-guided refinement and provable verification. The significant performance improvements call for interesting future work at the intersection of machine learning, constrained sampling, and automated reasoning.

    Relevant publications: CAV-20, IJCAI-21, and ICCAD-21 (Best Paper Award nomination)
    (Based on joint work with Priyanka Golia, Friedrich Slivovsky, and Subhajit Roy)
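    As a minimal illustration of the problem statement above (not of Manthan itself, which learns candidate functions and repairs them with proof-guided refinement), the following sketch brute-forces a function G over all truth-table entries; the function names are hypothetical:

```python
from itertools import product

def synthesize(F, n_x, n_y):
    """Brute-force functional synthesis: for each assignment to the inputs X,
    search for an output assignment Y making F(X, Y) true. Returns G as a
    dict X -> Y, or None if some X admits no witness (F is unrealizable)."""
    G = {}
    for X in product([False, True], repeat=n_x):
        for Y in product([False, True], repeat=n_y):
            if F(X, Y):
                G[X] = Y
                break
        else:
            return None  # F(X, .) is unsatisfiable for this X
    return G

# Toy specification: the single output must equal the XOR of the two inputs.
spec = lambda X, Y: Y[0] == (X[0] ^ X[1])
G = synthesize(spec, n_x=2, n_y=1)
```

    The enumeration is exponential in |X| and |Y|, which is exactly why a scalable engine must avoid it; it serves only to make the correctness condition F(X, G(X)) concrete.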


    Bio:

    Kuldeep Meel holds the NUS Presidential Young Professorship in the School of Computing at the National University of Singapore (NUS). His research interests lie at the intersection of Formal Methods and Artificial Intelligence. He is a recipient of the 2022 ACP Early Career Researcher Award, the 2019 NRF Fellowship for AI and was named AI's 10 to Watch by IEEE Intelligent Systems in 2020. His research program's recent recognitions include the 2022 CACM Research Highlight Award, 2022 ACM SIGMOD Research Highlight, IJCAI-22 Early Career Spotlight, 2021 Amazon Research Award, "Best of PODS-21" invite from ACM TODS, "Best Papers of CAV-20" invite from FMSD journal, IJCAI-19 Sister conferences best paper award track invitation. Before joining NUS in 2017, he received M.S. and Ph.D. from Rice University, co-advised by Supratik Chakraborty and Moshe Y. Vardi. His thesis work received the 2018 Ralph Budd Award for Best Ph.D. Thesis in Engineering and the 2014 Outstanding Masters Thesis Award from Vienna Center of Logic and Algorithms, IBM Ph.D. Fellowship, and Best Student Paper Award at CP 2015. He graduated with Bachelor of Technology (with honors) in Computer Science and Engineering from IIT Bombay.



  • Specification-Guided Reinforcement Learning


    Speaker:

    Dr. Suguman Bansal, Georgia Institute of Technology

    Date:2022-10-18
    Time:11:00:00 (IST)
    Venue:Teams
    Abstract:

    Reinforcement Learning (RL), especially when combined with neural networks (NN), has made remarkable strides in control synthesis in real-world domains, including challenging continuous (infinite-state) environments in robotics and game-playing. Yet, current RL approaches are poorly suited to control synthesis for long-horizon tasks, for a number of reasons. First, control tasks in RL are typically specified in the form of rewards, and providing a suitable reward function for complex, long-horizon tasks can be daunting. Second, RL algorithms are inherently myopic, as they respond to immediate rewards, which may cause the algorithm to learn policies that are optimal in the short term but below par in the long term. Lastly, the learned policies offer a poor degree of assurance: they are neither interpretable nor verifiable against the desired task.
    In this talk, I will discuss our work on RL from logical specifications, in which the task is expressed in the form of temporal specifications rather than rewards. While the use of temporal specifications resolves the first issue, I will discuss our trials and triumphs in our quest to design RL algorithms that scale to long-horizon tasks and offer theoretical guarantees.

     
    This talk is based on our NeurIPS 2021 paper and Invited Contribution to Henzinger-60.

    Bio:

    Suguman Bansal is an incoming Assistant Professor in the School of Computing at the Georgia Institute of Technology, starting in January 2023. Her research focuses on formal methods and their applications to artificial intelligence, programming languages, and machine learning. Previously, she was an NSF/CRA Computing Innovation Postdoctoral Fellow at the University of Pennsylvania, mentored by Prof. Rajeev Alur. She completed her Ph.D. at Rice University, advised by Prof. Moshe Y. Vardi. She is the recipient of a 2020 NSF CI Fellowship and was named a 2021 MIT EECS Rising Star.



  • Fractional Stable Matching and Computational Social Choice


    Speaker:

    Dr. Sanjukta Roy, Postdoctoral Scholar in the College of IST at Penn State

    Date:2022-10-12
    Time:12:00:00 (IST)
    Venue:Teams
    Abstract:

    In Computational Social Choice, we typically study problems involving agents with preferences. A classical problem in this domain is the stable matching problem, where the input is a bipartite graph G and every vertex (i.e., an agent) has a complete and strict preference list over the vertices on the other side. A stable matching is a mapping f: E(G) → {0, 1} that satisfies certain stability criteria.
    In the Fractional Stable Matching problem, the range of f is [0, 1], with the constraint that the values assigned to the edges incident to a vertex sum to at most 1. We study fractional matching with cardinal preferences and consider more general stability notions: the so-called ordinal stability and cardinal stability.
    We investigate the computational complexity of finding an ordinally stable or cardinally stable fractional matching that maximizes either the social welfare (i.e., the overall utility of the agents) or the number of fully matched agents (i.e., agents whose matching values sum to one). In this talk, we will focus on the definitions and discuss our algorithmic findings. At the end, I will also discuss some of my other research interests.
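    For the classical (integral) case, the stability criterion can be made concrete with a small sketch (my own illustration, not the paper's algorithm): a matching is stable iff it admits no blocking pair, i.e., no unmatched pair whose members prefer each other to their assigned partners.

```python
def blocking_pairs(match, pref_left, pref_right):
    """Pairs (l, r) not matched together that prefer each other to their
    current partners -- witnesses that `match` (a dict l -> r) is unstable."""
    partner = {r: l for l, r in match.items()}
    def prefers(prefs, agent, new, current):
        # An unmatched agent (current is None) prefers any partner to none.
        return current is None or prefs[agent].index(new) < prefs[agent].index(current)
    pairs = []
    for l, ranking in pref_left.items():
        for r in ranking:
            if match.get(l) == r:
                break  # agents ranked below l's own partner cannot block with l
            if prefers(pref_right, r, l, partner.get(r)):
                pairs.append((l, r))
    return pairs

# Two agents per side: both left agents rank x first; x prefers b to a.
pref_left = {'a': ['x', 'y'], 'b': ['x', 'y']}
pref_right = {'x': ['b', 'a'], 'y': ['b', 'a']}
```

    With these preferences, the matching {a: x, b: y} is blocked by (b, x), while {a: y, b: x} is stable.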


    Bio:

    Dr. Sanjukta Roy, Postdoctoral scholar in College of IST at Penn State



  • The Multiple Facets of Human Navigation on the Web


    Speaker:

    Dr. Akhil Arora, EPFL

    Date:2022-10-06
    Time:11:00:00 (IST)
    Venue:Teams
    Abstract:

    Navigability, defined as the ability to effectively and efficiently maneuver from a source to a target node, is one of the most important functions of any network. Fueled by our curiosity and information-seeking nature, we routinely navigate a plethora of real-world networks, including but not limited to the Web, online encyclopedic systems, news articles, and social networks. Consequently, the navigation patterns employed by humans provide tremendous insight into the way humans explore, browse, and interface with information on the Web; understanding and modeling human navigation behavior is thus an important task in the broad field of web data science and data mining.

     

    In this talk, we will look into three different facets of human navigation of networks, namely: (1) enablers, (2) characteristics, and (3) applications. We will first discuss the ability of entity linkers to enrich the link structure of information networks, thereby acting as an enabler of network navigation. We will also review Eigenthemes, a language- and domain-agnostic unsupervised method for entity linking. Next, we will review the characteristics of human navigation behavior in Wikipedia, the largest online encyclopedic system. Specifically, we will discuss how and under what scenarios real human navigation patterns differ from synthetic patterns generated using standardized null models. Lastly, we will discuss an important application, that of operationalizing link insertion. To this end, we will showcase the utility of navigation patterns as a strong signal for identifying positions at which to insert entities in Wikipedia articles.


    Bio:

    Akhil Arora is a PhD student advised by Prof. Robert West of the Data Science Lab (dlab) at EPFL and an external research collaborator of the Wikimedia Foundation. Prior to this, Akhil spent close to five years in the industry working with the research labs of Xerox and American Express as a Research Scientist. Even before that, he graduated from the Computer Science department of IIT Kanpur in June 2013, where he was advised by Prof. Arnab Bhattacharya. Akhil’s research interests include large-scale data management, graph mining, and machine learning. He is a recipient of the prestigious “EDIC Doctoral Fellowship” for the academic year 2018-19, and the Most Reproducible Paper Award at the ACM SIGMOD Conference, 2018. He has published his research in prestigious data management conferences, served as a program committee member, and co-organized workshops in these conferences. Additional details are available on his website: https://dlab.epfl.ch/people/aarora/.



  • Research at Computer Vision Lab at IIT Madras


    Speaker:

    Prof. Anurag Mittal, Professor, CSE, IITM

    Date:2022-05-05
    Time:12:00:00 (IST)
    Venue:Teams
    Abstract:

    In this talk, I will give an overview of the research activities at the Computer Vision Lab, IIT Madras, with a focus on some of our exciting latest work in Video Understanding and Vision + Language. In particular, I will talk about the importance of fusing Geometry and Statistical Inference for building robust Computer Vision systems, and our work in the multi-camera domain, where integration of information from multiple cameras is required. Then, I will talk about our work in robust Feature Extraction, Representation and Matching that utilizes orders rather than raw intensities, as orders between pixels are much more robust to changes in illumination, object color, etc., and lead to much more robust systems. Next, I will talk about our work in shape representation and matching. Shape is the most defining characteristic of most objects and leads to the most accurate object representations; it is particularly useful in 3D object representation and matching. Finally, I will talk in somewhat more detail about two recent lines of work. In the first, I will present our new work on a better representation for videos using an attention mechanism that attends to the common regions across a video using the concepts of co-segmentation. In the second, I will talk about improving vision + language tasks such as image and video captioning and visual question answering by using better models, including those that mitigate dataset biases.


    Bio:

    Anurag Mittal is currently a professor in the Computer Science and Engg. dept. at IIT Madras heading the Computer Vision Lab (2006-present).  Prior to this, from 2002-2005, he was a Research Scientist at the Real-Time Vision and Modeling Department at Siemens Corporate Research, Princeton NJ.  He completed his PhD in Computer Science from the University of Maryland, College Park in Dec 2002.  Before that, he completed an MS from Cornell University in 2000 and a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Delhi in 1997.

    His research interests are in all areas of Computer Vision, and he has published extensively on many varied topics such as Surveillance and Security, Feature Detection and Extraction, Robust Matching techniques, Shape Representation and Matching, zero-shot learning, Video Representations, Vision+Language and Image and Video Super-resolution.  He has 5000+ citations to his research publications.

    He is an active member of the Computer Vision community and has been an Associate Editor of CVIU since 2013. He has also served as an Area Chair for many major conferences in the area, such as ICCV, CVPR, ECCV, and ACCV.



  • Causal Machine Learning: A path to out-of-distribution generalization and fairness


    Speaker:

    Dr. Amit Sharma (MSR India)

    Date:2022-04-27
    Time:12:00:00 (IST)
    Venue:Teams
    Abstract:

    Current machine learning models face a number of challenges, including out-of-distribution generalization, fairness, privacy, robustness, and explainability. I will present causal machine learning as a new paradigm for learning models that aims to address all these challenges through a unifying formal framework. The key principle in causal machine learning is to express data-generating assumptions as a graph, which then helps to identify the correct regularization for different learning goals. Theoretically, we can show that a predictive model that uses only graph parents of the target variable is invariant to data distribution shifts, more robust to arbitrary interventions, and has better privacy guarantees than a standard ML model.

    I will describe applications of this principle in 1) obtaining state-of-the-art accuracy on the domain generalization task, where the test data comes from a different distribution than the training data; and 2) detecting and fixing unfairness in machine learning systems.
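    The invariance claim can be seen in a toy simulation (entirely my own synthetic example, not from the talk): Y is generated from its causal parent X1, while a spurious feature X2 tracks Y during training but flips sign at test time; a regressor on the parent survives the shift, while one on the spurious feature does not.

```python
import random

def make_data(n, flip, seed):
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        x1 = rng.gauss(0, 1)               # causal parent of y
        y = 2.0 * x1 + rng.gauss(0, 0.1)
        # Spurious feature: tracks y in training, anti-correlated after the shift.
        x2 = (-y if flip else y) + rng.gauss(0, 0.1)
        rows.append((x1, x2, y))
    return rows

def fit_slope(xs, ys):                      # least squares through the origin
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train = make_data(2000, flip=False, seed=0)
test = make_data(2000, flip=True, seed=1)   # distribution shift hits only x2

w_parent = fit_slope([r[0] for r in train], [r[2] for r in train])
w_spurious = fit_slope([r[1] for r in train], [r[2] for r in train])

err_parent = mse(w_parent, [r[0] for r in test], [r[2] for r in test])
err_spurious = mse(w_spurious, [r[1] for r in test], [r[2] for r in test])
```

    The parent-only model keeps a near-noise-floor test error, while the spurious-feature model's error blows up after the shift, mirroring the invariance guarantee stated above.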


    Bio:

    Amit Sharma is a Principal Researcher at Microsoft Research India. His work bridges causal inference techniques with machine learning, with the goal of making machine learning models generalize better, be explainable and avoid hidden biases. To this end, Amit has co-led the development of the open-source DoWhy library for causal inference and DiCE library for counterfactual explanations. The broader theme in his work is how machine learning can be used for better decision-making, especially in sensitive domains. In this direction, Amit collaborates with NIMHANS on mental health technology, including a recent app, MindNotes, that encourages people to break stigma and reach out to professionals. His work has received many awards including a Best Paper Award at ACM CHI 2021 conference, Best Paper Honorable Mention at ACM CSCW 2016 conference, 2012 Yahoo! Key Scientific Challenges Award and the 2009 Honda Young Engineer and Scientist Award. Amit received his Ph.D. in computer science from Cornell University and B.Tech. in Computer Science and Engineering from Indian Institute of Technology (IIT) Kharagpur.

    https://www.microsoft.com/en-us/research/people/amshar/



  • Data-Driven Ecosystem Migration: Migrating R from Lazy to Strict Semantics


    Speaker:

    Aviral Goel, Ph.D. candidate, CS, Northeastern University

    Date:2022-04-21
    Time:12:00:00 (IST)
    Venue:SIT-001
    Abstract:
    Evolving mainstream language ecosystems is challenging because of large package
    repositories and millions of active users. Even a tiny change can break a big
    chunk of otherwise functional code at this scale, significantly impacting users
    and discouraging adoption. Partial adoption of changes leads to incompatible
    libraries causing fragmentation of the ecosystem. In this talk, I will propose a data-driven
    strategy to evolve a language at scale with minimal impact on its users. I will
    apply this strategy to evolve the R language ecosystem from lazy to strict semantics.

    Bio:
    Aviral Goel is a Computer Science Ph.D. candidate at Northeastern University. He
    has a B.Tech. in Electronics and Communication Engineering from NSUT, New Delhi.
    Before pursuing a Ph.D., he wrote software at Yahoo! and National Centre for
    Biological Sciences, Bengaluru. 
    He maintains a personal website at: http://aviral.io.


  • Modelling functional activity for brain state identification


    Speaker:

    Dr. Sukrit Gupta, Hasso Plattner Institute

    Date:2022-03-10
    Time:11:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    Advances in neuroimaging techniques have made it possible to access intricate details of brain function and structure. Functional magnetic resonance imaging (MRI) gives us access to the brain's functional activations and aids in understanding its functional architecture during rest, neurological diseases, and varied task states. Data from functional MRI scans can be modelled as a network, known as the brain functional connectome, such that the network nodes represent brain regions and the edges between these nodes represent functional relationships between the regions. Studying the functional connectome has led to an improved understanding of cognitive and diseased brain states. In this presentation, I will discuss techniques that we proposed to uncover the functional alterations in the brain during different brain states.
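    The network construction described above can be sketched in a few lines (a generic Pearson-correlation connectome, an illustration that assumes nothing about the speaker's specific methods):

```python
import math

def connectome(timeseries):
    """Functional connectome: one node per brain region, edge weight equal to
    the Pearson correlation between the regions' activation time series."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb)
    regions = range(len(timeseries))
    return [[corr(timeseries[i], timeseries[j]) for j in regions] for i in regions]
```

    Each row of input is one region's activation over time; the output matrix is the weighted adjacency matrix whose alterations across rest, task, and disease states analyses of the functional connectome study.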


    Bio:

    Dr. Sukrit Gupta is currently a research fellow at the Hasso Plattner Institute, Berlin. Previously, he was a Research Scientist in the Deep Learning for Medical Imaging division at the Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore. He obtained his PhD in Computer Science from Nanyang Technological University, Singapore in 2020, where he worked in the areas of artificial intelligence (AI), neuroimaging, and network science. His PhD thesis was nominated for the best thesis award at the School of Computer Science and Engineering at NTU, Singapore. He pursued a bachelor's in engineering (Computer Science) at Punjab Engineering College, Chandigarh, where he went on exchange programs to Carnegie Mellon University (US) and Ben-Gurion University of the Negev (Israel). He was awarded the Institute Color in recognition of his research activities during his UG program.



  • Coping with irrational identity issues using random ideals


    Speaker:

    Prof. Nikhil Balaji

    Date:2022-03-10
    Time:12:00:00 (IST)
    Venue:Teams
    Abstract:

    Identity testing is a fundamental problem in algorithmic algebra. In particular, identity testing in number fields has been much studied in relation to solving systems of polynomial equations, polynomial identity testing, and decision problems on matrix groups and semigroups, among many other problems. Among number fields, cyclotomic fields, i.e., those generated by roots of unity, play a particularly important role. I'll introduce the problem of identity testing integers in cyclotomic fields and present efficient algorithms for some special cases. As an application, we will see how cyclotomic identity testing can yield a simple parallel algorithm for testing equality of compressed strings. No background in number theory or computational complexity will be assumed for this talk. Based on joint work with Sylvain Perifel, Mahsa Shirmohammadi, and James Worrell.



2020 talks

  • Communication complexity based approaches to formula lower bounds and lifting theorems


    Speaker:

    Dr Sajin Koroth, Simon Fraser University, Canada

    Date:2020-11-28
    Time:12:00:00 (IST)
    Venue:Microsoft teams
    Abstract:

    Despite the tremendous progress in the last couple of decades, many of the fundamental questions about the nature of computation remain open. This lack of progress has been explained by limitations of well-known techniques that drove progress in the past. The main focus of complexity theory in recent times has been the search for tools and techniques that avoid such barriers. This talk is motivated by the following (seemingly different) fundamental questions that remain wide open:

        • Are there natural problems that are poly time solvable but cannot be solved efficiently by parallel computation?
        • What connections exist between optimality in weak and strong models of computation?
    We make progress on both problems using connections and tools from communication complexity and information theory that avoid the known barriers. The first question is a central problem in complexity theory, and computational problems with "efficient" parallel algorithms form an important computational class in the theory and practice of algorithms. Most of linear algebra can be done using efficient parallel algorithms, yet a fundamental problem like linear programming (solvable in polynomial time) is not known to have efficient parallel algorithms. We make progress on the first question, known as the $P \neq NC^1$ problem, by studying a related conjecture called the KRW conjecture (proposed by Karchmer, Raz, and Wigderson in the '90s).
    The KRW conjecture reduces the $P \neq NC^1$ problem to studying a fundamental operation on Boolean functions, called block-composition, in the framework of communication complexity. We prove a variant of the KRW conjecture and open up a path to solving $P \neq NC^1$ via a restricted version of the KRW conjecture. Block-composition is also a central operation underlying a fundamental approach to the second question, known as "lifting theorems". Lifting theorems try to lift lower bounds from a weaker model of computation (usually decision trees) to a stronger model (usually communication complexity). Lifting theorems have been instrumental in resolving long-standing open problems in many areas of theoretical computer science, including communication complexity, circuit complexity, proof complexity, linear programming, and game theory. We make progress on this problem by proving new lifting theorems that work for a large class of Boolean functions. Because of the efficiency and generality of our lifting theorems, our results have found applications in many different areas, including circuit complexity, proof complexity, quantum communication complexity, and query complexity. The challenges involved in studying these problems in the framework of communication complexity are fundamentally different, and, looking forward, we suggest new approaches and tools.
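    Block-composition itself is simple to state (a generic definition, not specific to the speaker's results): split the input into blocks, apply the inner function g to each block, and feed the resulting bits to the outer function f.

```python
def block_compose(f, g, n):
    """Block-composition f∘g: split the input bit string into blocks of
    length n, apply g to each block, then apply f to the resulting bits."""
    def h(bits):
        assert len(bits) % n == 0
        blocks = [bits[i:i + n] for i in range(0, len(bits), n)]
        return f([g(b) for b in blocks])
    return h

xor = lambda bits: sum(bits) % 2                      # outer function f
maj = lambda bits: int(2 * sum(bits) > len(bits))     # inner function g
h = block_compose(xor, maj, 3)                        # h = XOR of majorities
```

    The KRW conjecture, roughly, asserts that computing such a composition by formulas is essentially no cheaper than computing f and g separately and chaining them.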


    Bio:

    Sajin Koroth is a postdoctoral researcher at Simon Fraser University. His research interests are in Complexity Theory, specifically in circuit complexity and communication complexity and the interplay between these two seemingly different areas. Earlier, he was a postdoctoral fellow at the University of Haifa. During this time he attended the Simons program on Lower Bounds in Computational Complexity at the University of California, Berkeley as a visiting postdoc. He obtained his Ph.D. from the Indian Institute of Technology Madras under the guidance of Jayalal Sarma.



  • Internet (Anti-)Censorship, Privacy, Anonymity and the Whole Nine Yards


    Speaker:

    Sambuddho Chakravarty, Asst. Prof., IIIT Delhi

    Date:2020-11-20
    Time:12:00:00 (IST)
    Venue:Microsoft teams
    Abstract:

    In the second decade of the century (circa the Arab Spring of 2011), the Internet is the new battlefield where wars between politicians, media, (h)acktivists, lawyers and the military shape the destiny of millions of people. Historically incepted as the ARPANET, it was engineered to serve as a means of communication even in the face of calamities and wars. Political will often runs antithetical to this very attribute. For instance, countries like China, Iran and the UAE use (homebrewed) firewalling infrastructure to censor web traffic -- sometimes under the pretext of preserving cultural and religious values, at other times to prevent political dissent. No wonder a large body of network censorship measurement focuses on these countries. While such countries are inherently (constitutionally) undemocratic, "free speech" over the Internet is, in recent years, being regularly suppressed even in democracies like India. Such evolutions are positioned on concerns otherwise paramount to the preservation of human rights -- e.g., policing child pornography. But state control of communication channels has been abused to silence dissent, even in India, where the Supreme Court deems freedom of speech on the Internet a fundamental right.

    In this context, it is natural to ask how free and open the Internet is, and how robust it is to censorship by countries like India, which in recent years has evolved a sophisticated censorship infrastructure.

    In this talk I present our work over the years, which has focused on the evolution of India's Internet censorship infrastructure: how it censors traffic (and now apps), and how various ISPs implement it. Further, I also present some research efforts to evade censorship (and also Internet shutdowns/blackouts).

    To begin with, we consider the question of whether India might potentially follow the Chinese model and institute a single, government-controlled filter. Our research shows that this would not be difficult, as the Indian Internet is already quite centralized. A few "key" ASes (~1% of Indian ASes) collectively intercept approximately 95% of paths to the censored sites we sample in our study, and also to all publicly visible DNS servers. About 5,000 routers spanning these key ASes would suffice to carry out IP or DNS filtering for the entire country; approximately 75% of these routers belong to only two private ISPs. Thereafter, we conducted an extensive study (the first of its kind) involving nine major ISPs of the country, examining what kinds of censorship techniques they use, what triggers them, their consistency and coverage, and how to evade them. Our results indicate a clear disparity among the ISPs in how widely they install censorship infrastructure. We also compare our findings against those obtained using the Open Observatory of Network Interference (OONI) tool, which surprisingly has several false positives and negatives. Currently we are looking into how Indian ISPs have evolved their censorship strategies to filter TLS-based traffic, along with implementing the latest mandate to block mobile apps.
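    The centralization measurement above can be caricatured in a few lines: given AS-level paths toward censored destinations, compute the fraction of paths that traverse at least one of the k most frequently occurring ASes. The sketch below uses toy data and hypothetical AS numbers, not the study's traceroute corpus.

```python
# Sketch of the path-interception computation (toy data; the real study
# derives AS paths from traceroutes toward censored sites).
from collections import Counter

def coverage_of_top_ases(as_paths, k):
    """Fraction of paths traversing at least one of the k most common ASes."""
    freq = Counter(asn for path in as_paths for asn in set(path))
    top = {asn for asn, _ in freq.most_common(k)}
    hit = sum(1 for path in as_paths if top & set(path))
    return hit / len(as_paths)

# Toy example: one AS sits on most paths, so a tiny set of "key" ASes
# already covers the bulk of them.
paths = [
    [55836, 9498, 15169],
    [45609, 9498, 13335],
    [24560, 9498, 16509],
    [17813, 132215],
]
print(coverage_of_top_ases(paths, 1))  # 0.75
```

    With real data the same computation, swept over k, yields curves like "1% of ASes cover 95% of paths".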

    Existing solutions to evade censorship -- proxies, VPNs, Tor -- have been designed primarily for the web, while other applications like VoIP (real-time voice) are mostly ignored. As part of our research we have extensively explored the feasibility of transporting real-time voice (mostly UDP) over Tor (which primarily supports TCP). Prior research deemed Tor unsuitable for such purposes. In our research we tried to identify how the interplay of network attributes (delay, jitter, bandwidth, etc.) impacts the performance of VoIP. To our surprise, the belief established by prior research seems unfounded.

    However, all such solutions that rely on proxies are prone to being filtered by ISPs, as these end-points are easily discoverable. Futuristic solutions like Decoy Routing, which rely on routers that double as "smart proxies", are resilient to such filtering. They have hitherto relied mostly on commodity servers and involve wide-scale traffic observation, inadvertently posing a threat to the privacy of users who do not require such services. To that end, we devised an SDN-based Decoy Routing solution, SiegeBreaker, that not only performs at line rates (comparable to native TCP) but also does not require inspection of all network flows, thus preserving the privacy of oblivious users. However, the deployability of such solutions remains a challenge, as they require support from major top-tier ISPs.

    A third alternative, combining the best of both the above solutions, involves tunnelling Internet traffic over that of various (semi-)real-time applications, e.g., Instant Messaging (IM). To that end, we designed and tested a scheme, Camoufler, that utilizes IM channels as-is for transporting traffic. The scheme provides unobservability and good QoS due to its inherent properties, such as low-latency message transport. Moreover, unlike Decoy Routing, it does not pose new deployment challenges. Performance evaluation of Camoufler, implemented on five popular IM apps, indicates that it provides sufficient QoS for web browsing: e.g., the median time to render the homepages of the Alexa top-1k sites was recorded to be about 3.6 s when using Camoufler implemented over Signal.
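    The IM-tunnelling idea can be caricatured as chunking a web payload into IM-sized text messages and reassembling them on the far side. The framing below (message size, header format) is entirely hypothetical; Camoufler's actual wire format is not described here.

```python
# Toy sketch of tunnelling bytes over a text-only IM channel.
import base64

MAX_MSG = 4096  # hypothetical per-message size limit of the IM channel

def to_im_messages(payload: bytes, stream_id: int):
    """Split a web payload into ordered, text-safe IM messages."""
    text = base64.b64encode(payload).decode()
    chunks = [text[i:i + MAX_MSG] for i in range(0, len(text), MAX_MSG)]
    return [f"{stream_id}:{seq}:{len(chunks)}:{chunk}"
            for seq, chunk in enumerate(chunks)]

def from_im_messages(messages):
    """Reassemble the payload; messages may arrive out of order."""
    parts = {}
    for msg in messages:
        _, seq, total, chunk = msg.split(":", 3)
        parts[int(seq)] = chunk
    assert len(parts) == int(total)
    return base64.b64decode("".join(parts[i] for i in range(len(parts))))

data = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" * 300
msgs = to_im_messages(data, stream_id=7)
assert from_im_messages(reversed(list(msgs))) == data  # order-tolerant
```

    The point of such a design is that, to a censor, the traffic is indistinguishable from ordinary (often end-to-end encrypted) chat messages.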


    Bio:

    Sambuddho Chakravarty has been an Assistant Professor in the Department of Computer Science and Engineering at the Indraprastha Institute of Information Technology, Delhi (IIIT Delhi) since June 2014. He completed his PhD in Computer Science from Columbia University, New York, where he worked at the Network Security Lab (NSL) and was advised by Prof. Angelos D. Keromytis. His research is broadly related to network security, and more specifically to network anti-censorship, counter-surveillance, network anonymity and privacy (and all problems revolving around such systems, e.g., network measurements, infrastructure, etc.). He heads a small research lab at IIIT Delhi comprising ten students (mostly PhD and B.Tech students) and collaborates actively with other networks and systems security researchers in India and abroad.



  • Fairness in Unsupervised Machine Learning: From Political Philosophy to Data Science Algorithms


    Speaker:

    Dr. Deepak Padmanabhan, Queen's Univ. Belfast (UK)

    Date:2020-02-28
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Data science has long considered learning better models and making better decisions based on existing data. With data science systems penetrating every aspect of daily life -- whether it be navigation systems that influence which route you pick for your commute, or personalized news delivery systems that decide what kind of news you would like to read -- it is increasingly critical to consider questions of fairness. For example, would you be more prone to pre-emptive checks at the hands of an AI system if you were of a different gender or race, or a native of a different region? There are various kinds of biases that data-driven systems can internalize, based on the data they train over, the assumptions they use, and the optimization that they model. Two prominent doctrines, disparate treatment and disparate impact, have been subject to much study in the context of classification systems. The scenario of unsupervised learning poses a deeper challenge, where the detection of biases is often trickier given the absence of labels in the data. This talk will introduce doctrines of fairness from political philosophy and cover streams of research on incorporating notions of fairness within retrieval and clustering tasks, including a recent work by the speaker on fairness in clustering. It will also briefly outline some fairness research directions which may be of interest for future data science research.
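    One widely used fairness notion in the clustering literature (the "balance" measure of Chierichetti et al.; not necessarily the notion used in the speaker's own work) scores a clustering by how well each cluster preserves the proportions of a protected group:

```python
def balance(clusters):
    """Min over clusters of min(p/q, q/p), where p and q count the two
    protected-group values in that cluster; 1.0 = perfectly balanced,
    0.0 = some cluster contains only one group."""
    worst = 1.0
    for members in clusters:
        a = sum(1 for g in members if g == "A")
        b = len(members) - a
        if a == 0 or b == 0:
            return 0.0
        worst = min(worst, a / b, b / a)
    return worst

# A clustering that splits the groups evenly is maximally balanced...
assert balance([["A", "B"], ["A", "B", "A", "B"]]) == 1.0
# ...while a fully segregated clustering has balance 0.
assert balance([["A", "A"], ["B", "B"]]) == 0.0
```

    Fair clustering algorithms then trade off the usual clustering objective against keeping this balance above a threshold.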


    Bio:

    Dr. Deepak Padmanabhan holds a faculty position in Computer Science at Queen's University Belfast (UK) and is an adjunct faculty member at IIT Madras. He received his B.Tech from CUSAT and his M.Tech and PhD from the Indian Institute of Technology Madras, all in Computer Science. His current research interests include data analytics, natural language processing, machine learning, similarity search and information retrieval. Deepak has published over 70 research papers across major venues in information and knowledge management. Before that, he was a researcher at IBM Research - India for ten years. His work has led to twelve patents from the USPTO. A Senior Member of the IEEE and the ACM and a Fellow of the UK Higher Education Academy, he is the recipient of the INAE Young Engineer Award 2015, an award recognizing scientific work by researchers across engineering disciplines in India. He has also authored multiple books on various areas of data science. He may be reached at deepaksp@acm.org (preferred) or d.padmanabhan@qub.ac.uk



  • Drones as On-Demand Infrastructure for Next-Generation Wireless Networks


    Speaker:

    Dr. Ayon Chakraborty, Researcher at NEC Laboratories in Princeton, New Jersey

    Date:2020-02-26
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Ubiquitous connectivity and sensing are among the grand challenge problems for next-generation wireless systems, 5G and beyond. However, limitations exist in terms of resource management and infrastructure deployment (e.g., cell towers, IoT sensors, etc.) that hinder the realization of true ubiquity in various contexts. Often these deployments need to be on-demand, quick and temporary in order to avoid over-provisioning and maintenance costs. Recent advances in unmanned aerial vehicle (UAV) technology, or drones, have the potential to change the landscape of wide-area wireless connectivity and sensing by bringing a new dimension -- "mobility" -- to the network infrastructure itself. I will demonstrate how drones can add great value to state-of-the-art communication and sensing systems, particularly when such infrastructure is compromised (natural disasters), inaccessible (security threats), sparingly present, or non-existent (rural areas).

    Our aim is to design a robust networked system that helps in effective communication and coordination among people (e.g., emergency responders) in such scenarios. In the first part of the talk, I will discuss how drones can be used as "flying" cellular base stations providing connectivity to clients on the ground. We will see how the drone's position in 3D aerial space is critical in determining the overall connectivity to those clients. Given the drone's limited flight time, we explore how to quickly search for a near-optimal spot for the drone to operate at, optimizing the capacity of the wireless radio access network (RAN). In the second part, we will learn how drones can also be used to localize such mobile clients in GPS-denied environments without access to any pre-deployed localization infrastructure. Overall, our contributions advance 5G's vision of ubiquitous connectivity by providing on-demand support for connectivity and sensing in challenged environments.
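    The placement step can be caricatured as a local search over 3D positions that maximizes an aggregate capacity estimate. The objective below is a deliberately crude inverse-square proxy, not the RAN capacity model from the talk; positions, step size, and client locations are all made up.

```python
# Toy hill-climbing search for a drone operating position.
import itertools, math

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def capacity(pos, clients):
    """Toy objective: sum of log(1 + 1/d^2) over clients (a crude SNR proxy)."""
    return sum(math.log1p(1.0 / max(dist2(pos, c), 1e-9)) for c in clients)

def hill_climb(start, clients, step=5.0, iters=200):
    """Greedily move to the best neighboring grid point while it improves."""
    pos, best = start, capacity(start, clients)
    for _ in range(iters):
        moves = [tuple(p + d * step for p, d in zip(pos, delta))
                 for delta in itertools.product((-1, 0, 1), repeat=3)]
        cand = max(moves, key=lambda m: capacity(m, clients))
        if capacity(cand, clients) <= best:
            break  # local optimum reached
        pos, best = cand, capacity(cand, clients)
    return pos

clients = [(0, 0, 0), (40, 0, 0), (0, 40, 0)]  # ground clients (z = 0)
print(hill_climb((100.0, 100.0, 50.0), clients))
```

    The talk's contribution lies in making such a search fast enough to finish within the drone's limited flight time; a naive exhaustive sweep of the 3D space would be far too slow.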


    Bio:

    Ayon Chakraborty has worked as a researcher in the Mobile Communications and Networking group at NEC Laboratories America in Princeton, New Jersey since 2017, after finishing his PhD in Computer Science from Stony Brook University, New York. He is broadly interested in designing mobile systems that interact with and interpret the physical world, spanning both algorithm design and system prototyping. He has held several internship positions at Bell Laboratories, Hewlett Packard Laboratories and Huawei Technologies, leading to longer-term collaborative efforts. He has regularly published in reputed venues for systems and networking, including INFOCOM, NSDI, MobiCom and CoNEXT, and was nominated for the best paper award at ACM SIGCOMM IMC 2014. In 2011, Ayon finished his B.E. in Computer Science and Engineering from Jadavpur University, Kolkata, where he was awarded the department gold medal/best project award. He is also a recipient of the DAAD WISE fellowship in 2010.



  • So you want to do a Startup?


    Speaker:

    Dinesh Chahlia, Google

    Date:2020-02-25
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    India is a thriving startup ecosystem. Everyone around you is making it big as an entrepreneur. You are tempted to do something big as well but don't know where to start. You are wondering whether you should do the startup now or take that amazing offer from an established company and wait till you achieve financial security. You are not sure what is left to disrupt anymore. Should you do something with Artificial Intelligence? How about systems infrastructure? Is it possible to solve real-world problems and still make big money? I would love to share my thoughts on the above.


    Bio:

    Dinesh is an alumnus of the CSE Department at IIT Delhi (2002 batch).

    Dinesh comes with an illustrious career spanning 18 years in the software industry. He began his career working with startups developing novel and disruptive solutions in India, and later moved on to work with tech giants such as Microsoft, Netflix and Google.

    He is currently in India, advising early-stage startups as an Advisor in Residence for a Google initiative called Google for Startups. The goal of the program is to level the playing field for startups across the world by connecting them with Google experts, entrepreneurs and advisors. He has chatted with over 30 startups in India at various stages over the last few weeks and is very excited to be working with 6-8 of those at a deeper engagement level.

    Dinesh is an engineering leader at Google Cloud, spearheading several highly critical and top-priority initiatives for the Google Cloud Platform. Previously, he led various projects from ideation to realization in the areas of security intrusion detection, recommendation engines and fraud detection at Microsoft and Netflix. He was the Head of Development for Auptyma, India, where he led the development of a low-overhead, cross-platform JVM profiler that was acquired by Oracle in 2007.

    Dinesh emphasizes fostering the culture and processes that enable happy and productive teams. He has a passion for cloud computing and for applying artificial intelligence to solve real-world problems. He is active in the startup community in the Seattle area, where he helps budding entrepreneurs validate ideas and create execution strategies.



  • Privacy from the prying eyes: Protecting privacy of social data from external and internal actors


    Speaker:

    Prof. Mainack Mondal, IIT Kgp

    Date:2020-02-21
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Today millions of users are uploading billions of pieces of (often personal) text, pictures and videos via large-scale networked systems (e.g., online social media sites and cloud storage platforms). As a result, designing and building private and secure access-management systems for this online data has become a widespread and extremely important research problem.

    To that end, our research aims to design and build data-driven, usable, private and secure systems to assist users in managing access to their online data. In this talk, I will focus on online social media sites (OSMs) like Facebook and Twitter, which are used by millions of users and plagued by frequent, well-publicized privacy and security issues. In OSMs, our work considers protecting the privacy of social data against actors both external and internal to the social media platform. I will first cover our work on protecting privacy from a set of external actors -- large-scale third-party data aggregators like Spokeo, which crawl and aggregate data from unsuspecting social media users. Specifically, I will present Genie, a system we built to protect the privacy of social data by limiting large-scale data aggregators. Next, I will move to our recent work on protecting the privacy of old archival social data from a set of internal actors -- unwanted OSM users. Social media platforms today often force users to choose a privacy setting while uploading data. However, changes in users' lives and relationships, as well as in the social media platforms themselves, can cause mismatches between a post's active privacy setting and the desired setting. I will present how we attacked this problem through a combination of a user study and the development of automated inference to identify potentially mismatched privacy settings. Finally, I will touch upon our work on "deletion privacy": this research direction deals with understanding content-removal practices in social media, the privacy concerns associated with them, and protecting the privacy of deleted content. I will conclude this talk with a brief overview of my current research agenda and ongoing work on building usable, secure and private access-management systems for online data.


    Bio:

    Dr. Mainack Mondal is an assistant professor of Computer Science at IIT Kharagpur. He completed his Ph.D. at the Max Planck Institute for Software Systems (MPI-SWS), Germany in 2017. Prior to joining IIT Kharagpur he was a postdoctoral researcher at the University of Chicago and Cornell Tech.

    Mainack is broadly interested in incorporating human factors into security and privacy, and consequently in designing usable online services. Specifically, he works on developing systems which provide usable privacy and security mechanisms to online users while minimizing system abuse. His work has led to papers at CCS, PETS, ICWSM, SOUPS, CSCW, CoNEXT and EuroSys, among others. His work also received a distinguished paper award at SOUPS.



  • Towards automated debugging and optimization


    Speaker:

    Dr. Abhilash Jindal, Co-founder and CTO of Mobile Enerlytics, San Francisco, CA

    Date:2020-02-20
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Debugging and optimization are largely ad-hoc, manual processes, taking up 35-75 percent of programmers' time and costing billions of dollars annually. This process is further exacerbated in the modern programming paradigm, where programmers stand on the "shoulders of giant" software frameworks to quickly program complex systems but have a limited understanding of the intricacies of the underlying system.

    In the first part of this talk, we will see how new power-management APIs on smartphones seriously encumber programming. These APIs, when not used correctly, give rise to a whole new class of energy bugs called sleep disorder bugs. These bugs plague the complete smartphone software stack, including apps, framework, kernel, and device drivers. I will present a taxonomy of sleep disorder bugs with precise bug definitions, which enabled us to create a suite of automated tools to identify different flavors of sleep disorder bugs. I will then present an automated static analysis tool, KLOCK, that identified 63 sleep-induced time bugs, a subclass of sleep disorder bugs, in the Android Linux kernel.
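    At the level of intuition, one flavor of sleep-disorder checking looks for an execution path that acquires a wake lock but can exit without releasing it. A minimal path-enumeration sketch over a toy control-flow graph (the real KLOCK tool analyzes the Android Linux kernel, not this made-up IR):

```python
def paths_leaking_wakelock(cfg, entry="entry", exit_="exit"):
    """Enumerate acyclic entry->exit paths whose acquire/release counts differ.
    cfg maps node -> (action, successors); action is 'acquire', 'release' or None.
    """
    leaks = []
    def walk(node, held, path):
        action, succs = cfg[node]
        held += {"acquire": 1, "release": -1}.get(action, 0)
        path = path + [node]
        if node == exit_:
            if held != 0:
                leaks.append(path)
            return
        for s in succs:
            if s not in path:  # ignore loops in this toy version
                walk(s, held, path)
    walk(entry, 0, [])
    return leaks

# Toy CFG: the error branch returns without releasing the lock.
cfg = {
    "entry":  (None,      ["lock"]),
    "lock":   ("acquire", ["work", "error"]),
    "work":   (None,      ["unlock"]),
    "error":  (None,      ["exit"]),   # bug: skips 'unlock'
    "unlock": ("release", ["exit"]),
    "exit":   (None,      []),
}
print(paths_leaking_wakelock(cfg))  # [['entry', 'lock', 'error', 'exit']]
```

    A production static analyzer replaces this exponential path walk with a dataflow fixpoint over the CFG, but the reported artifact is the same: a path on which the device is kept awake (or put to sleep) incorrectly.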

    In the second part of the talk, we will study how traditional profilers fall short of giving an actionable diagnosis for optimizing programs: even after being presented with performance hotspots, developers still need significant manual inspection and reasoning about the source code to answer the remaining optimization questions: 1. Is there a more efficient implementation? 2. How can one come up with it? I will present a new optimization methodology called differential profiling that automates answering these two questions. In particular, differential profiling automatically uncovers more efficient API usage by leveraging existing implementations of similar apps, which are bountiful in the app marketplace. Our differential energy profiling tool, DIFFPROF, employs a novel tree-matching algorithm for comparing the energy-profiler outputs of pairs of similar apps, and found 12 energy improvements in popular Android apps.
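    The core comparison can be sketched as a diff over two call trees annotated with costs: match nodes by call name and flag subtrees where one app spends disproportionately more. The node format and threshold below are hypothetical simplifications, not DIFFPROF's actual tree-matching algorithm.

```python
def diff_profiles(a, b, ratio=2.0):
    """Report calls where app A's cost exceeds app B's by `ratio`.
    A profile node is (name, cost, [children]); trees are matched by call name.
    """
    findings = []
    def walk(na, nb, trail):
        name, cost_a, kids_a = na
        _, cost_b, kids_b = nb
        here = trail + [name]
        if cost_b > 0 and cost_a / cost_b >= ratio:
            findings.append(("/".join(here), cost_a, cost_b))
        by_name = {kid[0]: kid for kid in kids_b}
        for kid in kids_a:
            if kid[0] in by_name:
                walk(kid, by_name[kid[0]], here)
    walk(a, b, [])
    return findings

# Two similar apps: A's sync path burns far more energy than B's.
app_a = ("main", 100, [("sync", 80, [("wakeup", 60, [])]), ("draw", 20, [])])
app_b = ("main",  60, [("sync", 25, [("wakeup", 10, [])]), ("draw", 20, [])])
print(diff_profiles(app_a, app_b))
# [('main/sync', 80, 25), ('main/sync/wakeup', 60, 10)]
```

    The flagged subtree points the developer of app A at the exact API usage where app B's implementation is more efficient.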


    Bio:

    Abhilash Jindal received B.Tech. in Electrical Engineering from IIT Kanpur. He received his Ph.D. in Computer Science from Purdue University where he researched software-based approaches for extending mobile battery life. He has published papers in top system conferences including OSDI, ATC, Eurosys, Mobisys, Sigmetrics, and Mobicom. Currently, he is commercializing his research work serving as the CTO of Mobile Enerlytics, a silicon valley startup. His research interests include mobile systems, software engineering, and operating systems.



  • Social media-based epidemic intelligence


    Speaker:

    Dr. Aditya Joshi, Postdoctoral Fellow, CSIRO (the national science agency of Australia)

    Date:2020-02-06
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Epidemic intelligence is the use of technology to detect and manage disease outbreaks. In this talk, I will present our work on early detection of epidemics using social media posts. This work is divided into two parts: (a) health mention classification (the use of natural language processing to detect illness reports in a social media post), and (b) health event detection (the use of time-series monitoring to detect epidemics from a social media stream). For health mention classification, we propose a linguistically motivated deep learning-based architecture. For health event detection, we experiment with an asthma outbreak in Melbourne in 2016. I will end the talk with a discussion of opportunities for social media-based epidemic intelligence in India: the second-largest mobile phone market and a richly multilingual country with a peculiar social media usage pattern.
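    The event-detection half can be illustrated with the simplest kind of time-series monitoring: flag a day whose count of health-mention posts is far above the recent baseline. The z-score rule and toy counts below are illustrative only; the talk's actual detector is not specified here.

```python
# Toy spike detector over daily counts of classified health mentions.
import statistics

def detect_spikes(daily_counts, window=7, z=3.0):
    """Return indices of days whose count exceeds mean + z*stdev of the
    preceding `window` days."""
    spikes = []
    for i in range(window, len(daily_counts)):
        past = daily_counts[i - window:i]
        mu, sd = statistics.mean(past), statistics.pstdev(past)
        if sd > 0 and daily_counts[i] > mu + z * sd:
            spikes.append(i)
    return spikes

# Day 9 mimics a sudden outbreak-like surge in illness reports.
counts = [12, 10, 11, 13, 12, 11, 12, 11, 12, 90, 85]
print(detect_spikes(counts))  # [9]
```

    In practice the classifier's false positives (figurative uses like "this traffic is killing me") make the upstream health mention classification step just as important as the monitoring rule.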


    Bio:

    Aditya Joshi is a Postdoctoral Fellow at CSIRO, the national science agency of Australia. He holds a joint PhD degree from IIT Bombay and Monash University, Australia. He has worked for a database MNC, an artificial intelligence startup and a government research organisation, each of which provided a valuable perspective. His teaching talks span diverse forums: NLP schools organised by IITB and IIIT-H, a tutorial at EMNLP (a leading NLP conference), and a TEDx talk. His research has been covered in media stories in the Indian Express, the Times of London and the Australian. He received the best PhD thesis award from the IITB-Monash Research Academy.



  • A Wakeup Call: Databases in an Untrusted Universe


    Speaker:

    Prof. Amr El Abbadi, University of California, Santa Barbara (UCSB)

    Date:2020-01-31
    Time:15:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Once upon a time, databases were structured and one-size-fits-all, and they resided on machines that were trustworthy; even when they failed, they simply crashed. This era has come and gone, as eloquently stated by Mike Stonebraker. We now have key-value stores, graph databases, text databases, and a myriad of unstructured data repositories. However, we as a database community still cling to our 20th-century belief that databases always reside on trustworthy, honest servers. This notion has been challenged and abandoned by many other Computer Science communities, most notably the security and distributed systems communities. The rise of the cloud computing paradigm, as well as the rapid popularity of blockchains, demands a rethinking of our naïve, comfortable belief in an ideal benign infrastructure. In the cloud, clients store their sensitive data on remote servers owned and operated by cloud providers. The security and crypto communities have made significant inroads into protecting both data and access privacy from malicious untrusted storage providers using encryption and oblivious data stores. The distributed systems and systems communities have developed consensus protocols to ensure the fault-tolerant maintenance of data residing on untrusted, malicious infrastructure. However, these solutions face significant scalability and performance challenges when incorporated into large-scale data repositories. Novel database designs need to directly address the natural tension between performance, fault-tolerance, and trustworthiness. This is a perfect setting for the database community to lead and guide. In this talk, I will discuss the state of the art in data management in malicious, untrusted settings, its limitations, and potential approaches to mitigate these shortcomings. As examples, I will use cloud and distributed databases that reside on untrustworthy, malicious infrastructure and discuss specific approaches to standard database problems like commitment and replication. I will also explore blockchains, which can be viewed as asset-management databases on untrusted infrastructure.


    Bio:

    Amr El Abbadi is a Professor of Computer Science at the University of California, Santa Barbara. He received his B.Eng. from Alexandria University, Egypt, and his Ph.D. from Cornell University. His research interests are in the fields of fault-tolerant distributed systems and databases, focusing recently on cloud data management and blockchain-based systems. Prof. El Abbadi is an ACM Fellow, AAAS Fellow, and IEEE Fellow. He was Chair of the Computer Science Department at UCSB from 2007 to 2011. He has served as a journal editor for several database journals, including The VLDB Journal, IEEE Transactions on Computers and The Computer Journal. He has been Program Chair for multiple database and distributed systems conferences. He currently serves on the executive committee of the IEEE Technical Committee on Data Engineering (TCDE) and was a board member of the VLDB Endowment from 2002 to 2008. In 2007, Prof. El Abbadi received the UCSB Senate Outstanding Mentorship Award for his excellence in mentoring graduate students. In 2013, his student, Sudipto Das, received the SIGMOD Jim Gray Doctoral Dissertation Award. Prof. El Abbadi is also a co-recipient of the Test of Time Award at EDBT/ICDT 2015. He has published over 300 articles in databases and distributed systems and has supervised over 35 Ph.D. students.



  • Fairness in Algorithmic Decision Making


    Speaker:

    Dr. Abhijnan Chakraborty, Post-doctoral Researcher, Max Planck Institute for Software Systems (MPI-SWS), Germany

    Date:2020-01-28
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Algorithmic (data-driven) decision making is increasingly being used to assist or replace human decision making in domains with high societal impact, such as banking (estimating creditworthiness), recruiting (ranking job applicants), the judiciary (offender profiling), healthcare (identifying high-risk patients who need additional care) and journalism (recommending news stories). Consequently, in recent times, multiple research works have uncovered the potential for bias (unfairness) in algorithmic decisions in different contexts and proposed mechanisms to control (mitigate) such biases. However, the emphasis of existing works has largely been on fairness in supervised classification or regression tasks, and fairness issues in other scenarios remain relatively unexplored. In this talk, I will cover our recent works on incorporating fairness in recommendation and matching algorithms in multi-sided platforms, where the algorithms need to fairly consider the preferences of multiple stakeholders. I will discuss the notions of fairness in these contexts and propose techniques to achieve them. I will conclude the talk with a list of open questions and directions for future work.


    Bio:

    Abhijnan Chakraborty is a Post-doctoral Researcher at the Max Planck Institute for Software Systems (MPI-SWS), Germany. He obtained his PhD from the Indian Institute of Technology (IIT) Kharagpur under the supervision of Prof. Niloy Ganguly (IIT Kharagpur) and Prof. Krishna Gummadi (MPI-SWS). During his PhD, he was awarded the Google India PhD Fellowship and the Prime Minister's Fellowship for Doctoral Research. Prior to his PhD, he spent two years at Microsoft Research India, working in the area of mobile systems. His current research interests fall under the broad theme of Computing and Society, covering the research areas of Social Computing, Information Retrieval and Fairness in Machine Learning. He has authored several papers in top-tier computer science conferences including WWW, KDD, AAAI, CSCW, ICWSM and MobiCom. His research works have won the best paper award at ASONAM'16 and the best poster award at ECIR'19. He is a recipient of an internationally competitive research grant from the Data Transparency Lab to advance his research on fairness and transparency in algorithmic systems. More details about him can be found at

    https://people.mpi-sws.org/~achakrab



  • Scalable algorithms for rapidly advancing DNA sequencing technologies


    Speaker:

    Dr. Chirag Jain, Postdoctoral Fellow, Genome Informatics Section, National Institutes of Health (NIH), USA

    Date:2020-01-27
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Genomics continues to have an immense impact on the life sciences, from elucidating fundamental genetic mechanisms to providing new keys to understanding diseases. Catalyzed by breakthroughs in genomic technologies, high-throughput DNA sequencing has become a major generator of data, catching up with the most prominent big-data applications such as astronomy and social media. As a result, it has become challenging to design scalable DNA sequence algorithms with suitable accuracy guarantees.


    Bio:

    Dr. Chirag Jain is currently a postdoctoral fellow with Dr. Adam Phillippy in the Genome Informatics Section at the National Institutes of Health (NIH), USA. He earned his doctorate in Computational Science at Georgia Tech under the supervision of Prof. Srinivas Aluru, and his bachelor's in Computer Science from IIT Delhi. He is a recipient of the Best GPU Paper Award at IEEE HiPC'14, Best Poster awards at RECOMB'19, CRIDC'19 and IITD-OpenHouse'14, and the Reproducibility-Initiative Award at ACM Supercomputing'16.



  • Security and Privacy of Connected Autonomous Vehicles


    Speaker:

    Dr. Vireshwar Kumar, Postdoctoral Research Associate, Purdue University

    Date:2020-01-22
    Time:00:00:00 (IST)
    Venue:Bharti-101
    Abstract:

    The upcoming smart transportation systems, which consist of connected autonomous vehicles, are poised to transform our everyday life. The sustainability and growth of these systems to their full potential will significantly depend on their robustness against security and privacy threats. Unfortunately, the communication protocols employed in these systems lack mainstream network security capabilities due to the energy constraints of the deployed platforms and the bandwidth constraints of the communication medium. In this talk, I will present the results of my efforts in anatomizing two vital communication protocols employed in smart transportation: (1) the vehicle-to-everything (V2X) communication protocol, which is utilized to facilitate wireless communication among connected vehicles, and (2) the controller area network (CAN) protocol, which is utilized within an autonomous vehicle to enable real-time control of critical automotive components, including brakes. For each of these two protocols, I will first describe the inquisitive approach that led to the discovery of new security vulnerabilities. Then, through experiments on real-world systems, I will demonstrate how these vulnerabilities can be exploited to launch malicious attacks that evade the state-of-the-art defense mechanisms employed in these systems. I will conclude the talk by discussing novel countermeasures that are required to mitigate these fundamental vulnerabilities and prevent their exploitation.


    Bio:

    Dr. Vireshwar Kumar is a Postdoctoral Research Associate in the Department of Computer Science at Purdue University. Vireshwar earned his B.Tech. in Electrical Engineering at the Indian Institute of Technology Delhi in 2009, and his Ph.D. in Computer Engineering at Virginia Tech in 2016. He was the recipient of the Outstanding Ph.D. Student Award from the Center for Embedded Systems for Critical Applications at Virginia Tech. He also had a short stint as a Project Assistant in the Department of Electrical Communication Engineering at the Indian Institute of Science in 2010. His research interests include discovering and mitigating security vulnerabilities in the communication protocols employed in cyber-physical systems, e.g., smart homes, smart transportation, and smart cities. Vireshwar’s research has been featured in top-tier security venues, including the ACM Conference on Computer and Communications Security (CCS) and IEEE Transactions on Information Forensics and Security (TIFS). He has also served on the TPC of flagship conferences, including the IEEE Conference on Communications and Network Security (CNS) and the IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN).



  • Redesigning how networks work to make the net"work"


    Speaker:

    Dr. Praveen Tammana, Postdoctoral researcher at Princeton University

    Date:2020-01-16
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Solving network-management problems in real time becomes increasingly difficult with a human in the loop and the ever-growing demands of modern applications for high performance, high availability, and high security. An alternative for making network management easier while meeting these demands is to work toward the ambitious goal of self-managing networks, which are able to manage themselves with minimal human involvement.

    Practically deployable self-managing networks heavily rely on fine-grained telemetry data to understand what is going on in the network (visibility) and then make appropriate network-management decisions. However, building a telemetry system that provides the necessary visibility is extremely challenging and technically expensive, mainly because of the large resource overhead of monitoring and collecting telemetry data from massive networks that can have up to millions of links and thousands of high-speed switches. Today, due to limited resources, network infrastructure offers poor visibility; as a consequence, operators have to do a lot of manual work or a lot of estimation, which often leads to inaccurate decisions.

    In this talk, I will highlight a scalable network telemetry system, SwitchPointer, which obtains high visibility by making architectural changes across the network entities (e.g., switches, servers) in a large-scale data center. The main challenge is how to effectively use limited resources at switches and still obtain high visibility. I will discuss in detail how SwitchPointer addresses this challenge and enables fast and accurate management decisions.


    Bio:

    Praveen Tammana is currently a Postdoctoral researcher at Princeton University. His research interests are in Systems and Networking. He develops scalable, fast, and easy-to-use real-world networked systems. His recent work focuses on two aspects of network management: network telemetry and traffic engineering. Prior to Princeton, he received his Ph.D. from The University of Edinburgh, UK, and his Master's degree from IIT-Madras, India. He has over three years of industrial experience, working with Intel Technology and Juniper Networks in Bangalore, and Cisco Systems in San Jose, USA.



  • Text is not Text: Challenges in deep text understanding in professional domains


    Speaker:

    Vijay Saraswat, Global Head of AI R&D, Goldman Sachs

    Date:2020-01-14
    Time:16:30:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Thanks to Big Data, Compute, Code (and, surprisingly, People), NLU research has entered a golden period. Hitherto, much of the impetus for this work has come from the desire to computationally understand "mass content" -- content from the web, social media, and news sources. Here, relatively shallow meaning-extraction techniques have worked reasonably well, without needing to use linguistically motivated, deep, constraint-based NL systems such as LFG and HPSG.

    Significant challenges arise, however, when one starts to work with text in professional domains (e.g., financial or legal). Here documents such as regulations, contracts, agreements (e.g., loan, credit, master service), financial prospectuses, company and analyst reports must be addressed.

    A contract (e.g. commercial line of credit) may involve multiple agreements with (typically, overriding) amendments, negotiated over many years. References are used at multiple semantic levels, and written using genre-specific conventions (e.g., |Casualty Event pursuant to Section 2.05(b)(ii)(B)|, |the meaning specified in Section 5.14|). Documents (e.g. EU regulations) may contain editing instructions that specify amendments to previous documents by referencing their clauses and supplying (quoted) replacement text.

    Such documents typically contain highly complex text (very low Flesch scores), with single sentences spreading over multiple paragraphs, with named sub-clauses. They may use specific conventions (e.g. parenthetical structures) to present parallel semantic propositions in a syntactically compact way. They may use technical terms, whose meaning is well-understood by professionals, but may not be available in formalized background theories. Moreover, documents typically contain definitional scopes so that the same term can have various meanings across documents.

    Further, documents usually define hypothetical or potential typical events (e.g. |events of default|), rather than actual (or fake) concrete events (e.g. |Barack Obama was born in Kenya|); text may be deontic, not factual. Text may specify complex normative definitions, while carving out a series of nested exceptions. It may include sophisticated argumentation structures (e.g. about company valuations) that capture critical application-specific distinctions. Ironically, in some cases we see rather contorted text (e.g. defining contingent interest rates) which is essentially a verbalization of mathematical formulas.

    In short: professional text has an enormously rich structure, refined over centuries of human business interactions. This structure is distant from the news (WSJ, NYT) corpora used for "broad domain" NL research, and even the "English as a Formal Language" approach of traditional linguists.

    We outline a long-term research agenda to computationalize such documents. We think of language processors as compilers, operating on the input document at varying levels of abstraction (abstract syntax tree, intermediate representation) and using a variety of techniques (partial evaluation, abstract interpretation) to generate meaning representations (rather than object code), intended primarily for use with back-end reasoners. We hypothesize the need to extend the highly sophisticated, large-parameter pattern matching characteristic of today's deep learning systems with linguistically rigorous analyses, leveraging logical representations.


    Bio:

    Vijay Saraswat graduated from IIT Kanpur in 1982 with a B Tech in Electrical Engineering, and from Carnegie-Mellon University in 1989 with a PhD in Computer Science. Over thirty years, he has been a Member of the Research Staff at Xerox PARC, a Technology Consultant at AT&T Research, and a Distinguished Research Staff Member and Chief Scientist at IBM TJ Watson Research Center.

    Vijay's research interests span a number of areas in Computer Science, across AI, logic and programming systems. He is particularly known for his work on concurrent constraint programming, (with Mary Dalrymple and colleagues) on "glue semantics", and on the X10 programming language for high performance computation. His work has won numerous awards.

    Vijay joined Goldman Sachs in 2017 to help found corporate R&D. He currently leads the AI R&D work at GS, with team members in New York, Bangalore, Frankfurt, Hong Kong and other locations. The team is focused on natural language understanding, knowledge extraction, representation and reasoning. The team is looking to establish relationships with key academic and industrial partners, with engagements geared towards creation of public data-sets and associated publications.



  • Attention and its (mis)interpretation


    Speaker:

    Danish Pruthi, CMU

    Date:2020-01-14
    Time:14:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful for explaining why a model makes its decisions to stakeholders. In this talk, I will discuss a simple technique for training models to produce deceptive attention masks, calling into question the use of attention for interpretation. Further, the talk will characterize the cost of deception (in terms of accuracy) and shed light on the alternative mechanisms that the models use to cheat.
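    To make the idea concrete, here is a toy NumPy sketch (an illustration of the kind of objective involved, not the speaker's actual method, models, or data): the training loss combines a task loss with a penalty on the attention mass assigned to a designated "impermissible" token. Because another token carries redundant information, gradient descent finds attention weights that look innocent while the prediction stays accurate:

    ```python
    import numpy as np

    # One attention layer over three tokens with scalar values. Token 0 is
    # "impermissible"; token 1 carries the same value, so the model can hide
    # its reliance on token 0's content without losing accuracy.
    v = np.array([5.0, 5.0, 1.0])     # token values (tokens 0 and 1 redundant)
    target, bad, lam, lr = 5.0, 0, 1.0, 0.5
    s = np.zeros(3)                   # attention scores (logits)

    for _ in range(2000):
        a = np.exp(s) / np.exp(s).sum()        # softmax attention weights
        pred = a @ v
        # gradient of (pred - target)^2 + lam * a[bad] w.r.t. the weights
        g_a = 2 * (pred - target) * v
        g_a[bad] += lam
        # chain through the softmax Jacobian: diag(a) - a a^T
        g_s = a * g_a - a * (a @ g_a)
        s -= lr * g_s

    a = np.exp(s) / np.exp(s).sum()
    pred = a @ v
    assert a[bad] < 0.1 and abs(pred - target) < 0.1  # deceptive yet accurate
    ```

    The final attention mask assigns almost no weight to the penalized token, yet the prediction error is negligible: the attention explanation no longer reflects what information determined the output.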


    Bio:

    Danish Pruthi is a Ph.D. student at the School of Computer Science at Carnegie Mellon University. Broadly, his research aims to enable machines to understand and explain natural language phenomena. He completed his bachelor's degree in computer science at BITS Pilani, Pilani in 2015. He has also spent time doing research at Facebook AI Research, Microsoft Research, Google, and the Indian Institute of Science. He is a recipient of the Siebel Scholarship and the CMU Presidential Fellowship.



  • Models for Human-AI Decision Making


    Speaker:

    Prof. Ambuj Singh, University of California, Santa Barbara (UCSB)

    Date:2020-01-14
    Time:15:30:00 (IST)
    Venue:Bharti Building #101
    Abstract:

    Teams of the future will likely consist of humans and AI agents. To this end, we conducted experiments to explore how such teams integrate their individual decisions into a group response. We propose a set of models to explain team decision making using appraisal dynamics, Prospect Theory, Bayes rule, and eigenvector centrality. The first two models encode how individuals in a group appraise one another's expertise and the risk-rewards of their opinions. The second two models encode how groups integrate their members' preferences to form a group decision. Decision making in the experiments consists of two sequential tasks: in the first, a team decides either to report one of the presented options to a multiple-choice question or to consult one of the agents. If the team decides to use an agent, the second task consists of integrating the agent's response with the team's previous responses and reporting an answer. Our findings suggest that the proposed models explain decision making in teams. Furthermore, in the first decision task the teams exceed the performance of their best human member most of the time, as well as that of the AI agents, but not in the second decision task.


    Bio:

    Ambuj K. Singh is a Professor of Computer Science at the University of California, Santa Barbara, with part-time appointments in the Biomolecular Science and Engineering Program and the Technology Management Program. He received a B.Tech. degree from the Indian Institute of Technology, Kharagpur, and a Ph.D. degree from the University of Texas at Austin. His research interests are broadly in the areas of network science, machine learning, and bioinformatics. He has led a number of multidisciplinary projects including UCSB's Interdisciplinary Graduate Education Research and Training (IGERT) program on Network Science funded by the National Science Foundation (NSF). He is currently directing a Multidisciplinary University Research Initiative (MURI) on the Network Science of Teams. He has graduated more than 25 Ph.D. students over his career.



  • A Regularization view of Dropout in Neural Networks


    Speaker:

    Ambar (Ph.D. student, Johns Hopkins University)

    Date:2020-01-13
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Dropout is a popular training technique used to improve the performance of Neural Networks. However, a complete understanding of the theoretical underpinnings behind this success remains elusive. In this talk, we will take a regularization view to explain the empirically observed properties of Dropout. In the first part, we will investigate the case of a single-layer linear neural network with Dropout applied to the hidden layer, and observe how the Dropout algorithm can be seen as an instance of gradient descent applied to a changing objective. Then we will see how training with Dropout is equivalent to adding a regularizer to the original network. With these tools, we will show that Dropout is equivalent to a nuclear-norm regularized problem, where the nuclear norm is taken on the product of the weight matrices of the network. Inspired by the success of Dropout, several variants have been proposed recently in the community. In the second part of the talk, we will analyze some of these variants (DropBlock and DropConnect) and obtain theoretical reasons for their success over vanilla Dropout. Finally, we will end with a unified theory to analyze Dropout variants and understand some of its implications.
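    The flavor of result described above can be checked numerically in a simplified setting (a single hidden layer, squared loss; a sketch, not the talk's full nuclear-norm analysis): the expected dropout loss equals the loss of the mean network plus an explicit data-dependent penalty on the weights, which can be verified exactly by enumerating all Bernoulli masks:

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    d, r, p = 4, 3, 0.5            # input dim, hidden width, keep probability
    U = rng.normal(size=(2, r))    # output layer weights
    V = rng.normal(size=(r, d))    # hidden layer weights
    x = rng.normal(size=d)
    t = rng.normal(size=2)

    a = V @ x                      # hidden activations
    # Exact expected dropout loss: enumerate all 2^r Bernoulli keep-masks.
    exp_loss = 0.0
    for z in itertools.product([0, 1], repeat=r):
        z = np.array(z)
        prob = np.prod(np.where(z == 1, p, 1 - p))
        out = U @ (z * a) / p      # inverted-dropout scaling
        exp_loss += prob * np.sum((t - out) ** 2)

    # Closed form: loss of the mean network plus a per-hidden-unit penalty.
    penalty = (1 - p) / p * sum(np.sum(U[:, i] ** 2) * a[i] ** 2 for i in range(r))
    closed = np.sum((t - U @ V @ x) ** 2) + penalty
    assert np.isclose(exp_loss, closed)
    ```

    The penalty term couples the column norms of U with the hidden responses, which is the kind of product-of-factors regularizer that the nuclear-norm equivalence in the talk formalizes.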


    Bio:

    Ambar is a PhD student in the Computer Science Department at Johns Hopkins University. He is advised by René Vidal and affiliated with the Mathematical Institute for Data Science and the Vision Lab at JHU. Previously, he obtained his Bachelor's degree in Computer Science from IIIT Delhi. His current research interest lies in the theory of deep learning; specifically, he aims to theoretically understand the properties induced by common deep learning techniques on the optimization of deep architectures. He is currently working on understanding the regularization properties induced by common tricks used in training DNNs. His other interests include understanding adversarial examples generated for computer vision systems. Ambar is a MINDS Data Science Fellow, and his research is also supported by the IARPA DIVA and NSF Optimization for Deep Learning grants.



  • Top Ten Challenges to Realizing Artificial Intelligence (AI)


    Speaker:

    Dr. Ravi Kothari, Ashoka U.

    Date:2020-01-13
    Time:16:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    The many successes of deep learning notwithstanding, fundamental impediments remain to realizing true Artificial Intelligence (AI). In this widely accessible talk, we discuss the top ten challenges to realizing true AI and highlight some of the ongoing efforts towards resolving these challenges.


    Bio:

    Dr. Ravi Kothari started his professional career as an Assistant Professor in the Department of Electrical and Computer Engineering of the University of Cincinnati, OH, USA, where he later became a tenured Associate Professor and Director of the Artificial Neural Systems Laboratory. After about 10 years in academia, he joined IBM Research and held several positions over the years, including Chief Scientist of IBM Research, India and Global Chief Architect of the IBM-Airtel outsourcing account, where he introduced the first-ever stream-based processing of telecom data (Airtel is one of the world's largest full-service telecom providers, and the Chief Architect is responsible for the IT strategy, design and realization across more than 15 countries). He also became IBM's first Distinguished Engineer from India. After about 15 years at IBM, he joined Accenture for a short stint as a Fellow, Technical Labs, prior to returning to academia as a Professor of Computer Science at Ashoka University.

    Dr. Kothari's expertise lies in Machine Learning, Pattern Recognition, AI, Data Mining, Big Data and other data-driven initiatives. His present research focuses on multiple aspects of Artificial Intelligence including "creative" machines.

    Dr. Kothari has served as an Associate Editor of the IEEE Transactions on Neural Networks, IEEE Transactions on Knowledge and Data Engineering, Pattern Analysis and Applications (Springer) as well as on the program committees of various conferences. He has been an IEEE Distinguished Visitor for many years, a member of the IBM Academy of Technology and the IBM Industry Academy. He was a recipient of the Gerstner Award (IBM's highest team award), the Best of IBM award (IBM's highest individual award) and is a fellow of the Indian National Academy of Engineering.



  • Parameterized Complexity of Network Design Problems


    Speaker:

    Dr. Pranabendu Misra, postdoctoral fellow at the Max Planck Institute for Informatics, Saarbrucken, Germany

    Date:2020-01-06
    Time:12:00:00 (IST)
    Venue:Bharti-501
    Abstract:

    Network Design Problems, which concern designing minimum-cost networks that satisfy a given set of ``connectivity constraints'', are very well studied in computer science and combinatorial optimization. Almost all of these problems are NP-hard, and a number of results are known about them in the realm of approximation algorithms. Parameterized Complexity is a different framework for dealing with computational intractability, where typically we try to design fast algorithms to solve the problem on those instances that admit a ``small cost solution''. In this talk we will look at some recent results on the parameterized complexity of network design problems, and future directions for research.
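    The abstract does not fix a specific algorithm, but the canonical illustration of the parameterized paradigm it describes is the bounded search tree for Vertex Cover (used here only as a stand-in example, not a result from the talk): deciding whether a graph has a cover of size k takes O(2^k · m) time, which is fast precisely on instances that admit a small-cost solution:

    ```python
    def vertex_cover(edges, k):
        """Return a vertex cover of size <= k, or None if none exists.
        Branch on an uncovered edge (u, v): one endpoint must be in any
        cover, so try both -- a search tree of depth k with <= 2^k leaves."""
        if not edges:
            return set()
        if k == 0:
            return None
        u, v = edges[0]
        for pick in (u, v):
            rest = [e for e in edges if pick not in e]   # edges pick covers
            sub = vertex_cover(rest, k - 1)
            if sub is not None:
                return sub | {pick}
        return None

    # The 5-cycle needs 3 vertices to cover all its edges.
    c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    assert vertex_cover(c5, 2) is None
    assert len(vertex_cover(c5, 3)) == 3
    ```

    The running time depends exponentially only on the parameter k, not on the size of the graph, which is the defining feature of a fixed-parameter tractable algorithm.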


    Bio:
    Pranabendu Misra is a postdoctoral fellow at the Max Planck Institute for Informatics, Saarbrucken, Germany.
    Before that, he was a researcher at the Department of Informatics, University of Bergen, Norway.
    He obtained his PhD in Computer Science from the Institute of Mathematical Sciences, Chennai, India, advised by Saket Saurabh.
    He did his undergraduate studies at the Chennai Mathematical Institute in Mathematics and Computer Science. 
    His primary research interests lie in graph theory and algorithms focusing on Parameterized Complexity. 

    He has also worked on problems in approximation algorithms, matroids, fault-tolerant graphs, matching under preferences, and de-randomization.



  • Fault-Tolerant Reachability and Strong-connectivity Structures


    Speaker:

    Dr. Pranabendu Misra, postdoctoral fellow at the Max Planck Institute for Informatics, Saarbrucken, Germany

    Date:2020-01-06
    Time:12:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Network Design Problems, which concern designing minimum-cost networks that satisfy a given set of ``connectivity constraints'', are very well studied in computer science and combinatorial optimization. Almost all of these problems are NP-hard, and a number of results are known about them in the realm of approximation algorithms. Parameterized Complexity is a different framework for dealing with computational intractability, where typically we try to design fast algorithms to solve the problem on those instances that admit a ``small cost solution''. In this talk we will look at some recent results on the parameterized complexity of network design problems, and future directions for research.


    Bio:

    Pranabendu Misra is a postdoctoral fellow at the Max Planck Institute for Informatics, Saarbrucken, Germany.
    Before that, he was a researcher at the Department of Informatics, University of Bergen, Norway.
    He obtained his PhD in Computer Science from the Institute of Mathematical Sciences, Chennai, India, advised by Saket Saurabh.
    He did his undergraduate studies at the Chennai Mathematical Institute in Mathematics and Computer Science.
    His primary research interests lie in graph theory and algorithms focusing on Parameterized Complexity.
    He has also worked on problems in approximation algorithms, matroids, fault-tolerant graphs, matching under preferences, and de-randomization.



  • Congruence Closure Algorithms for Uninterpreted and Interpreted Symbols


    Speaker:

    Prof. Deepak Kapur

    Date:2020-01-02
    Time:15:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    Algorithms are presented for computing the congruence closure and the conditional congruence closure of conditional ground equations over uninterpreted and interpreted symbols. The algorithms are based on a recently developed framework for computing (conditional) congruence closure by abstracting nonflat terms with constants, as first proposed in Kapur's congruence closure algorithm (RTA 1997). This framework is general and flexible. It is also used to develop (conditional) congruence closure algorithms for cases where associative-commutative function symbols have additional properties, including idempotency, nilpotency, and/or identities, as well as their various combinations. It also works for Horn properties, including extensionality of function symbols. The concept of a Horn closure of ground equations with uninterpreted as well as interpreted symbols is proposed to decide Horn equations directly.
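    A minimal sketch of the flattening idea mentioned above, restricted to uninterpreted symbols (the conditional and interpreted cases covered in the talk are beyond this sketch): every nonflat application is abstracted by a fresh constant, asserted equalities are merged with union-find, and congruence is propagated by re-canonicalizing the application table until it stabilizes:

    ```python
    import itertools

    class CongruenceClosure:
        """Congruence closure over ground terms with uninterpreted symbols.
        Terms are strings (constants) or tuples ('f', arg_term, ...)."""

        def __init__(self):
            self.parent = {}           # union-find over constants
            self.apps = {}             # (f, arg reps) -> naming constant
            self.fresh = itertools.count()

        def find(self, c):
            while self.parent.setdefault(c, c) != c:
                self.parent[c] = self.parent[self.parent[c]]  # path halving
                c = self.parent[c]
            return c

        def name(self, term):
            """Abstract a (possibly nonflat) term by a constant, recursively."""
            if isinstance(term, str):
                return self.find(term)
            f, *args = term
            key = (f, tuple(self.name(a) for a in args))
            if key not in self.apps:
                self.apps[key] = f"c{next(self.fresh)}"   # fresh constant
            return self.find(self.apps[key])

        def merge(self, s, t):
            a, b = self.name(s), self.name(t)
            if a == b:
                return
            self.parent[a] = b
            # Propagate congruence: applications whose argument classes
            # collapsed to the same key must have their names merged too.
            changed = True
            while changed:
                changed = False
                new = {}
                for (f, args), c in self.apps.items():
                    key = (f, tuple(self.find(x) for x in args))
                    if key in new and self.find(new[key]) != self.find(c):
                        self.parent[self.find(c)] = self.find(new[key])
                        changed = True
                    else:
                        new.setdefault(key, c)
                self.apps = new

        def equal(self, s, t):
            return self.name(s) == self.name(t)

    cc = CongruenceClosure()
    cc.merge(("f", "a", "b"), "a")                        # assert f(a,b) = a
    assert cc.equal(("f", ("f", "a", "b"), "b"), "a")     # so f(f(a,b),b) = a
    assert not cc.equal("a", "b")
    ```

    The classic consequence f(f(a,b),b) = a follows purely by congruence: once f(a,b) and a share a class, the outer application gets the same key as the inner one.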


    Bio:

    Professor, Department of Computer Science, The University of New Mexico, Albuquerque, NM, USA



  • Novel and Efficient Techniques for Guaranteeing Routing and Protocol Level Deadlock Freedom in Interconnection Networks


    Speaker:

    Mayank Parasar (Georgia Tech)

    Date:2020-01-02
    Time:12:00:00 (IST)
    Venue:SIT Building #001
    Abstract:

    Interconnection networks are the communication backbone of any system. They occur at various scales: from on-chip networks between processing cores, to supercomputers between compute nodes, to data centers between high-end servers. One of the most fundamental challenges in an interconnection network is that of deadlocks. Deadlocks can be of two types: routing level deadlocks and protocol level deadlocks. Routing level deadlocks occur because of cyclic dependency between packets trying to acquire buffers, whereas protocol level deadlocks occur because a response message is stuck indefinitely behind a queue of request messages. Both kinds of deadlock render the forward movement of packets impossible, leading to complete system failure.

    Prior work either restricts the path that packets take in the network or provisions an extra set of buffers to resolve routing level deadlocks. For protocol level deadlocks, separate sets of buffers are reserved at every router for each message class. Naturally, proposed solutions either restrict the packet movement resulting in lower performance or require higher area and power.

    In my work, I propose a new set of efficient techniques for providing both routing and protocol level deadlock freedom. My techniques provide periodic forced movement to packets in the network, which breaks any cyclic dependency of packets. Breaking this cyclic dependency results in resolving routing level deadlocks. Moreover, because of periodic forced movement, the response message is never stuck indefinitely behind the queue of request messages; therefore, my techniques also resolve protocol level deadlocks.
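    The periodic-forced-movement idea can be illustrated with a toy simulation (an illustrative sketch, not the dissertation's actual mechanism): routers in a unidirectional ring, each holding one packet, form a cyclic buffer dependency that never resolves under ordinary hop rules, while a periodic simultaneous rotation delivers every packet:

    ```python
    def simulate(n=4, rotation_period=None, steps=50):
        # Each router holds one packet destined two hops downstream, so the
        # ring starts full and no buffer is ever free: a routing deadlock.
        buf = {i: (i, (i + 2) % n) for i in range(n)}  # router -> (pkt id, dest)
        delivered = set()
        for step in range(1, steps + 1):
            if rotation_period and step % rotation_period == 0:
                # forced movement: every packet advances one hop at once,
                # needing no free buffer -- this breaks the cyclic wait
                buf = {(i + 1) % n: pkt for i, pkt in buf.items()}
            else:
                # ordinary hop: move only into an empty downstream buffer
                for i in list(buf):
                    j = (i + 1) % n
                    if i in buf and j not in buf:
                        buf[j] = buf.pop(i)
            # eject packets that have reached their destination
            for i, (pid, dest) in list(buf.items()):
                if i == dest:
                    delivered.add(pid)
                    del buf[i]
        return delivered

    assert simulate(rotation_period=None) == set()      # deadlocked forever
    assert simulate(rotation_period=1) == {0, 1, 2, 3}  # forced moves deliver all
    ```

    Without forced movement no buffer ever frees up, so no packet moves; with it, every packet makes guaranteed periodic progress, which is why the same mechanism also prevents responses from waiting indefinitely behind requests.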


    Bio:

    Mayank Parasar is a fifth-year Ph.D. candidate in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received an M.S. in Electrical and Computer Engineering from Georgia Tech in 2017 and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT) Kharagpur in 2013.

    He works in computer architecture with the research focus on proposing breakthrough solutions in the field of interconnection networks, memory system and system software/application layer co-design. His dissertation, titled Novel and Efficient Techniques for Guaranteeing Routing and Protocol Level Deadlock Freedom in Interconnection Networks, formulates techniques that guarantee deadlock freedom with a significant reduction in both area and power budget.

    He held the position of AMD Student Ambassador at Georgia Tech in the year 2018-19. He received the Otto & Jenny Krauss Fellow award in the year 2015-16.



  • Groebner Bases: Universality, Parametricity and Canonicity


    Speaker:

    Prof. Deepak Kapur

    Date:2020-01-01
    Time:11:00:00 (IST)
    Venue:Bharti Building #501
    Abstract:

    The talk will integrate the concept of a universal Groebner basis, which serves as a Groebner basis for all admissible term orderings, with that of a parametric (more popularly called comprehensive) Groebner basis, which serves as a Groebner basis for all possible specializations of parameters. Three different but related approaches will be presented. The first extends Kapur's algorithm for computing a parametric Groebner basis: along with branching on whether the head coefficient is made nonzero or not, it also branches on ordering constraints in order to first choose a term that could serve as the head term. The second is based on Kapur, Sun and Wang's algorithm for computing a comprehensive Groebner basis and system, but uses a reduced universal Groebner basis to generate a universal parametric Groebner basis. The third starts with a reduced Groebner basis under one arbitrary ordering and then generates a universal comprehensive Groebner basis by incrementally changing the orderings while partitioning specializations. The result of these algorithms is a mega Groebner basis that works for every admissible ordering as well as for any specialization of parameters.


    Bio:

    Professor, Department of Computer Science, The University of New Mexico, Albuquerque, NM, USA



Copyright © 2024 Department of Computer Science and Engineering. All Rights Reserved.