Technology Management and Information Systems Seminars
Date | Speaker | Affiliation | Presentation | Room |
---|---|---|---|---|
6.2.24 | Erez Shmueli | Tel Aviv University | Continuous monitoring and early detection of health and well-being related events using smartphones and smartwatches. In this presentation, I'll introduce the PerMed (Personalized Medicine) study, which aims to enhance early diagnosis of infectious respiratory diseases by integrating electronic medical records with behavioral data from smartphones and smartwatches. I'll discuss the study's motivation, goals, and design, and share key results from substudies, including predicting COVID-19 infections, evaluating vaccine safety, and assessing the impact of wars on individuals and sub-populations. | Lorry Lokey building, first floor meeting room (MAMAD). |
20.2.24 | Oren Bar-Gill | Harvard Law School | Algorithmic Harm in Consumer Markets. Machine learning algorithms are increasingly able to predict what goods and services particular people will buy, and at what price. It is possible to imagine a situation in which relatively uniform, or coarsely set, prices and product characteristics are replaced by far more in the way of individualization. Companies might, for example, offer people shirts and shoes that are particularly suited to their situations, that fit with their particular tastes, and that have prices that fit their personal valuations. In many cases, the use of algorithms promises to increase efficiency and to promote social welfare; it might also promote fair distribution. But when consumers suffer from an absence of information or from behavioral biases, algorithms can cause serious harm. Companies might, for example, exploit such biases in order to lead people to purchase products that have little or no value for them or to pay too much for products that do have value for them. Algorithmic harm, understood as the exploitation of an absence of information or of behavioral biases, can disproportionately affect members of identifiable groups, including women and people of color. Since algorithms exacerbate the harm caused to imperfectly informed and imperfectly rational consumers, their increasing use provides fresh support for existing efforts to reduce information and rationality deficits, especially through optimally designed disclosure mandates. In addition, there is a more particular need for algorithm-centered policy responses. Specifically, algorithmic transparency (transparency about the nature, uses, and consequences of algorithms) is both crucial and challenging; novel methods designed to open the algorithmic "black box" and "interpret" the algorithm's decision-making process should play a key role. In appropriate cases, regulators should also police the design and implementation of algorithms, with a particular emphasis on exploitation of an absence of information or of behavioral biases. | Lorry Lokey building, first floor meeting room (MAMAD). |
12.3.24 | Tali Ziv | Data Science Manager at Meta | Data Science and Experimentation at Meta. This seminar meeting will be a Q&A session with Tali, a behavioral economist and data scientist with vast industry experience. As the title suggests, Tali will talk about data science research and large-scale experimentation taking place at Meta today. Please come prepared with questions for Tali to make this a truly engaging conversation! | Lorry Lokey building, first floor meeting room (MAMAD). |
9.4.24 | Nir Grinberg | Ben-Gurion University | Supersharers of Fake News on Twitter. Governments may have the capacity to flood social media with fake news, but little is known about the use of flooding by ordinary voters. Here, we identify 2,107 registered U.S. voters who account for 80% of the fake news shared on Twitter during the 2020 U.S. presidential election by an entire panel of 664,391 voters. We find that supersharers are important members of the network, reaching a sizable 5.2% of registered voters on the platform. Women, older adults, and registered Republicans are significantly over-represented among supersharers. Supersharers' massive volume does not seem automated but is rather generated through manual and persistent retweeting. These findings highlight a vulnerability of social media for democracy, where a small group of people distorts the political reality for many. A short, hypothetical sketch of this cumulative-share computation appears after the table. | Lorry Lokey building, first floor meeting room (MAMAD). |
16.4.24 | Imry Kissos | AWS | Adapting language model architectures for time series forecasting. Time series forecasting is essential for decision making across industries such as retail, energy, finance, and healthcare. However, developing accurate machine-learning-based forecasting models has traditionally required substantial dataset-specific tuning and model customization. In a paper we have just posted to arXiv, we present Chronos, a family of pretrained time series models based on language model architectures. Like large language models or vision-language models, Chronos is a foundation model, which learns from large datasets how to produce general representations useful for a wide range of tasks. The use of pretrained models for time series forecasting is an exciting frontier. By reformulating the forecasting task as a kind of language modeling, Chronos demonstrates a simpler path to general and accurate prediction. Moreover, Chronos will be able to seamlessly integrate future advances in the design of LLMs. We invite researchers and practitioners to engage with Chronos, now available open-source, and to join us in developing the next generation of time series models. A minimal sketch of the forecasting-as-language-modeling idea appears after the table. | Lorry Lokey building, first floor meeting room (MAMAD). |
21.5.24 | Ariel Goldstein | Hebrew University | Exploring the Cognitive Boundaries and Problem-Solving Capacities of Large Language Models: Simulating Human Interactions and Solving Complex Tasks. This talk presents two recent papers. The first investigates the challenges faced by LLMs in accurately simulating human interactions, with a specific focus on political debates. Despite recent advancements, LLMs, as complex statistical learners, often exhibit unexpected behaviors due to their inherent social biases. Our study highlights these limitations by demonstrating how LLM agents tend to conform to their embedded biases, deviating from established social dynamics when simulating political debates. Using an automatic self-fine-tuning method, we manipulated the biases within the LLM, showing that agents realign with the altered biases. These findings underscore the critical need for further research to develop methods that mitigate these biases, aiming to create more realistic and accurate simulations. The second paper explores the problem-solving capabilities of LLMs by evaluating their performance on stumpers: unique single-step intuition problems that are challenging for humans but easily verifiable. We compared the performance of four state-of-the-art LLMs (Davinci-2, Davinci-3, GPT-3.5-Turbo, GPT-4) with that of human participants. Our findings reveal that new-generation LLMs excel at solving stumpers, surpassing human performance. However, humans demonstrate superior skills in verifying the solutions to these problems. This research provides valuable insights into the cognitive abilities of LLMs and suggests potential enhancements for their problem-solving capabilities across various domains. Together, these papers contribute to our understanding of LLMs' strengths and limitations, offering critical perspectives for future research in AI and the development of more robust and realistic computational simulations. | Lorry Lokey building, first floor meeting room (MAMAD). |
25.6.24 | Tsvi Kuflik | Haifa University | Fairness, explainability and in-between: Understanding the impact of different explanation methods on non-expert users' perceptions of fairness toward an algorithmic system. In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is an increasing awareness of the need to explain their underlying decision-making process and resulting outcomes. Since these systems are often considered black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users' trust and fairness perception towards the system, regardless of its actual fairness, which can be measured using various fairness tests and measurements. Different explanation styles may have a different impact on users' perception of fairness towards the system and on their understanding of the system's outcome. Hence, there is a need to understand how various explanation styles may impact non-expert users' perceptions of fairness and understanding of the system's outcome. In this study, we aimed to fulfill this need. We performed a between-subject user study to examine the effect of various explanation styles on users' fairness perception and understanding of the outcome. In the experiment, we examined four known styles of textual explanations (case-based, demographic-based, input influence-based and sensitivity-based) along with a new style (certification-based) that reflects the results of an auditing process of the system. The results suggest that providing some kind of explanation contributes to users' understanding of the outcome and that some explanation styles are more beneficial than others. Moreover, while explanations provided by the system are important and can indeed enhance users' perception of fairness, their perception mainly depends on the outcome of the system. The results may shed light on one of the main problems in explainability of algorithmic systems: choosing the best explanation to promote users' fairness perception towards a particular system, with respect to the outcome of the system. The contribution of this study is reflected in the new and realistic case study that was examined, in the creation and evaluation of a new explanation style that can be used as the link between the actual (computational) fairness of the system and users' fairness perception, and in the need to analyze and evaluate explanations while taking into account the outcome of the system. | |
2.7.24 | Daphne Raban | Haifa University | Policy shaping the impact of open-access publications: a longitudinal assessment. This study investigated the longitudinal impact of Open-Access (OA) publication in Israel, a country that has not yet adopted a formal OA policy. We analyzed bibliometric indicators of Israeli researchers across all academic disciplines, focusing on OA publications published in journals and repositories from 2010 to 2020. Data extracted from Scopus reveal a consistent "OA citation advantage" (OACA) throughout the study period, suggesting that OA publication influences citation rates beyond time and scientific novelty. Although green-route publications are the most numerous and have increased steadily over the years, and gold-route publications have risen recently, the hybrid route demonstrates a significantly higher citation advantage, highlighting an "OA subtype citation effect". Furthermore, our study uncovers a "funding effect" on OA grant-funded publications, indicating a doubled likelihood of publishing in OA when research is funded, contingent on the funder's OA policy. The findings offer comprehensive insights into OA publishing trends in Israel, serving as a case study for assessing the impact of OA policy. The study underscores the importance of both funder-specific OA policies and broader initiatives by the global scientific community and intergovernmental organizations to promote OA publishing and address potential disparities in research dissemination. Efforts to combat the "rich get richer" effect can foster equitable access to scientific knowledge. | |
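
The supersharers entry above (9.4.24) defines supersharers as the small group of panel members accounting for 80% of fake-news shares. The sketch below is a hypothetical illustration of how such a cumulative-share cutoff could be computed from per-user share counts; the column names and numbers are made up for the example and are not the study's data or pipeline.

```python
import pandas as pd

# Hypothetical per-user counts of fake-news shares (illustrative data only).
shares = pd.DataFrame({
    "user_id": ["u1", "u2", "u3", "u4", "u5"],
    "fake_news_shares": [500, 300, 120, 50, 30],
})

def supersharers(df: pd.DataFrame, coverage: float = 0.80) -> pd.DataFrame:
    """Return the smallest set of top sharers whose shares jointly reach
    `coverage` of all fake-news shares (the 'account for 80%' idea)."""
    ranked = df.sort_values("fake_news_shares", ascending=False).reset_index(drop=True)
    cumulative = ranked["fake_news_shares"].cumsum() / ranked["fake_news_shares"].sum()
    # Keep users up to and including the first one that crosses the threshold.
    cutoff = (cumulative >= coverage).idxmax()
    return ranked.iloc[: cutoff + 1]

print(supersharers(shares))  # here u1 and u2 jointly account for 80% of shares
```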
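
The Chronos entry above (16.4.24) describes reformulating forecasting as language modeling. The sketch below illustrates one way that idea can work, assuming a scheme in which a real-valued series is mean-scaled and quantized into a small vocabulary of discrete tokens that a language model could be trained on and sampled from; the bin count, value range, and function names are assumptions for illustration, not the actual Chronos implementation.

```python
import numpy as np

# Illustrative settings (assumed for this sketch, not the real Chronos config).
NUM_BINS = 1024          # size of the "time series vocabulary"
LOW, HIGH = -15.0, 15.0  # value range kept after mean scaling
BIN_EDGES = np.linspace(LOW, HIGH, NUM_BINS + 1)
BIN_CENTERS = (BIN_EDGES[:-1] + BIN_EDGES[1:]) / 2

def tokenize(series: np.ndarray) -> tuple[np.ndarray, float]:
    """Mean-scale a series and map each value to a discrete token id."""
    scale = np.abs(series).mean() + 1e-8
    scaled = np.clip(series / scale, LOW, HIGH)
    tokens = np.digitize(scaled, BIN_EDGES[1:-1])  # ids in [0, NUM_BINS - 1]
    return tokens, scale

def detokenize(tokens: np.ndarray, scale: float) -> np.ndarray:
    """Map token ids back to approximate real values."""
    return BIN_CENTERS[tokens] * scale

# A token sequence like this could be fed to an off-the-shelf language model,
# which would then be trained or sampled to "continue" the series; sampled
# tokens are mapped back to forecast values with detokenize().
series = np.array([10.0, 12.0, 11.0, 13.0, 15.0])
tokens, scale = tokenize(series)
print(tokens)
print(np.round(detokenize(tokens, scale), 2))
```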
Past Seminars: