Technology Management and Information Systems Seminars
| Date | Speaker | Affiliation | Presentation | Room |
|---|---|---|---|---|
| Nov. 11 | Prof. Avi Seidmann | Boston University | **Seeing Is Believing: Performance Benchmarking Outperforms Technical Explanations in AI-Assisted Decision-Making under Uncertainty**<br>Despite artificial intelligence's proven superiority in many business decisions, managers still routinely reject AI recommendations (a phenomenon often termed algorithm aversion), leading to billions in foregone value annually. Recent research suggests that making AI systems more interpretable could increase adoption; however, little empirical work has examined whether and how interpretability affects AI adoption in high-stakes, uncertain business environments where decisions are repetitive and outcomes are inherently unpredictable, as is frequently the case in financial markets. Through two pre-registered, incentive-compatible experiments (N=1,213), we challenge the prevailing assumption that increased interpretability universally promotes AI adoption. | Recanati building, room 303. |
| Nov. 18 | Etzion Harari | TAU | **Multi-View Graph Feature Propagation for Privacy Preservation and Feature Sparsity**<br>Graph Neural Networks (GNNs) have demonstrated remarkable success in node classification tasks over relational data, yet their effectiveness often depends on the availability of complete node features. In many real-world scenarios, however, feature matrices are highly sparse or contain sensitive information, leading to degraded performance and increased privacy risks. Furthermore, direct exposure of information can result in unintended data leakage, enabling adversaries to infer sensitive information. To address these challenges, we propose a novel Multi-view Feature Propagation (MFP) framework that enhances node classification under feature sparsity while promoting privacy preservation. MFP extends traditional Feature Propagation (FP) by dividing the available features into multiple Gaussian-noised views, each propagating information independently through the graph topology. The aggregated representations yield expressive and robust node embeddings. This framework is novel in two respects: it introduces a mechanism that improves robustness under extreme sparsity, and it provides a principled way to balance utility with privacy. Extensive experiments on graph datasets demonstrate that MFP outperforms state-of-the-art baselines in node classification while substantially reducing privacy leakage. Moreover, our analysis shows that propagated outputs serve as alternative imputations rather than reconstructions of the original features, preserving utility without compromising privacy. A comprehensive sensitivity analysis further confirms the stability and practical applicability of MFP across diverse scenarios. Overall, MFP provides an effective and privacy-aware framework for graph learning in domains characterized by missing or sensitive features. | Recanati building, room 303. |
| Nov. 18 | Itzik Ziv | TAU | **The Impact of LLM-Generated Reviews on Recommender Systems: Textual Shifts, Performance Effects, and Strategic Platform Control**<br>The rise of generative AI technologies is reshaping content-based recommender systems (RSes), which increasingly encounter AI-generated content alongside human-authored content. This study examines how the introduction of AI-generated reviews influences RS performance and business outcomes. We analyze two distinct pathways through which AI content can enter RSes: user-centric, in which individuals use AI tools to refine their reviews, and platform-centric, in which platforms generate synthetic reviews directly from structured metadata. Using a large-scale dataset of hotel reviews from TripAdvisor, we generate synthetic reviews using LLMs and evaluate their impact across the training and deployment phases of RSes. We find that AI-generated reviews differ systematically from human-authored reviews across multiple textual dimensions. Although both user- and platform-centric AI reviews enhance RS performance relative to models without textual data, models trained on human reviews consistently achieve superior performance, underscoring the quality of authentic human data. Human-trained models generalize robustly to AI content, whereas AI-trained models underperform on both content types. Furthermore, tone-based framing strategies (encouraging, constructive, or critical) substantially enhance the effectiveness of platform-generated reviews. Our findings highlight the strategic importance of platform control in governing the generation and integration of AI-generated reviews, ensuring that synthetic content supports recommendation robustness and sustainable business value. | Recanati building, room 303. |
| Dec. 23 | Dr. Natalia Silberstein | Data Science Team Lead at Teads | **Applied Machine Learning Research in a Large-Scale Advertising Ecosystem**<br>Online advertising platforms like Teads operate in a high-throughput environment, engaging with a billion unique users and processing over one billion content and ad recommendations per second. At this scale, even fractional improvements in algorithmic efficiency translate directly into substantial impacts on platform revenue and stakeholder value. This strong coupling between model performance and business outcomes necessitates a deep and continuous investment in applied scientific research. In this talk, we will provide an overview of the Teads advertising ecosystem and highlight key research initiatives across the ad-serving stack. We will explore the unique machine learning challenges and solutions we have developed, concluding with a discussion of open problems and promising opportunities for future collaboration between academia and the ad-tech industry. | Recanati building, room 303. |
| Dec. 30 | Prof. Noam Koenigstein | TAU | **Towards Faithful and Interpretable Recommender Systems: Recent Advances in Explainable Recommendations**<br>Explainable AI (XAI) plays a crucial role in recommender systems by helping users understand personalized recommendations and fostering trust in these increasingly complex models. Effective explanations must accurately reflect the true reasoning behind recommendations, going beyond simple plausibility. In this talk, Prof. Noam Koenigstein from Tel Aviv University will share recent advances from his research, focusing on methods to ensure recommendations are both interpretable and faithful to their underlying logic. He will introduce innovative approaches for generating accurate explanations, rigorously evaluating their fidelity, and uncovering interpretable concepts from complex recommender models. | |
| Jan. 06 | Prof. Claudia V. Goldman | The Hebrew University Business School | **Shaping Digital Trust in AI-made Decisions**<br>From the industrial revolution to the ongoing Fourth Industrial Revolution driven by AI, technological advancements have continuously reshaped society and industry. These advancements are changing the ways we interact with each other (society) and with our environment and resources (economy). The main driving forces of these revolutions are technological breakthroughs (from machines, through electricity and the Internet, up to AI), which are followed by periods of human adaptation as societies adjust to the transformative changes these innovations introduce. What if we could shape these human-AI interactions as seamless synergies that are mutually enriching and naturally aligned for both sides, instead of the current unilateral interactions in which the human side adjusts to the technology? This topic is particularly important nowadays, as we are already facing a shift from experts and well-trained professionals interacting with technology (i.e., AI tools and solutions) to a broad spectrum of human users (from experts, through decision makers, to laymen). On the one hand, these AI solutions can improve human lives (in domains such as medicine, transportation, finance, and education). On the other hand, humans need to use these digital technologies effectively and critically to gain those benefits. In this talk, I will present our research on explainable AI algorithms for AI agents making decisions under uncertainty while interacting with humans in a variety of domains (transportation, manufacturing, and disaster management). | |
| Jan. 13 | Daniella Schmitt | Nova School of Business and Economics | **Pricing and Consumption in Subscription Settings**<br>This paper investigates how subscription pricing affects consumption intensity, a key performance driver for firms operating under subscription-based business models. Using data from an online news publisher, the study shows that subscribers entering through promotional pricing consume substantially more than regular-price subscribers, even after accounting for differences in churn behavior. The paper develops and estimates an empirical model of subscription and consumption behavior, allowing recovery of consumers' unobserved willingness to pay. The model is used to evaluate alternative pricing policies and their effects on both subscription revenues and advertising revenues. The findings highlight the importance of understanding how pricing shapes not only who subscribes, but also how much they engage with the product. | |
| Feb. 03 | Dr. Asi Messica | Shield | **When GenAI Meets the Real World: Practical Lessons from Deploying LLMs in Production**<br>Large Language Models (LLMs) have shown impressive capabilities in controlled settings, yet deploying them in real-world products introduces practical, organizational, and technical challenges. In this talk, I share lessons from deploying LLM-based systems in production across two industry domains: creative visual editing at Lightricks and financial compliance surveillance at Shield. Drawing on three applied research efforts, I examine the gap between LLM capabilities in principle and the requirements of reliable, scalable, and cost-effective systems. I first present JSON Whisperer, a patch-based framework for structured JSON editing that improves efficiency while preserving correctness. I then discuss a real-time visual editing solution, where latency and cost constraints motivated an efficient distillation approach that matches the performance of larger proprietary models. Finally, I describe a case study from Shield, showing how hybrid rule-based and LLM systems can enhance compliance workflows while maintaining trust, auditability, and regulatory alignment. Together, these cases highlight recurring themes in industrial GenAI deployment: efficiency-first design, task-specific evaluation and architectural guardrails, and the careful integration of LLMs into production systems to achieve real business impact. | |
| Mar. 17 | Avichay Chriqui | TAU | TBA | |
| Apr. 28 | Osnat Mokryn | Haifa Univ. | TBA | |
| May 5 | Prof. Avi Goldfarb | | TBA | |
Past Seminars:

