Technology Management and Information Systems Seminars
| Date | Speaker | Affiliation | Presentation | Room |
|---|---|---|---|---|
| Nov. 11 | Prof. Avi Seidmann | Boston University | **Seeing Is Believing: Performance Benchmarking Outperforms Technical Explanations in AI-Assisted Decision-Making under Uncertainty**<br>Despite artificial intelligence's proven superiority in many business decisions, managers still routinely reject AI recommendations (a phenomenon often termed algorithm aversion), leading to billions in foregone value annually. Recent research suggests that making AI systems more interpretable could increase adoption; however, little empirical work has examined whether and how interpretability affects AI adoption in high-stakes, uncertain business environments where decisions are repetitive and outcomes are inherently unpredictable, as is frequently the case in financial markets. Through two pre-registered, incentive-compatible experiments (N=1,213), we challenge the prevailing assumption that increased interpretability universally promotes AI adoption. | Recanati building, room 303. |
| Nov. 18 | Etzion Harari | TAU | **Multi-View Graph Feature Propagation for Privacy Preservation and Feature Sparsity**<br>Graph Neural Networks (GNNs) have demonstrated remarkable success in node classification tasks over relational data, yet their effectiveness often depends on the availability of complete node features. In many real-world scenarios, however, feature matrices are highly sparse or contain sensitive information, leading to degraded performance and increased privacy risks. Furthermore, direct exposure of raw features can result in unintended data leakage, enabling adversaries to infer sensitive attributes. To address these challenges, we propose a novel Multi-view Feature Propagation (MFP) framework that enhances node classification under feature sparsity while promoting privacy preservation. MFP extends traditional Feature Propagation (FP) by dividing the available features into multiple Gaussian-noised views, each propagating information independently through the graph topology. The aggregated representations yield expressive and robust node embeddings. This framework is novel in two respects: it introduces a mechanism that improves robustness under extreme sparsity, and it provides a principled way to balance utility with privacy. Extensive experiments on graph datasets demonstrate that MFP outperforms state-of-the-art baselines in node classification while substantially reducing privacy leakage. Moreover, our analysis shows that the propagated outputs serve as alternative imputations rather than reconstructions of the original features, preserving utility without compromising privacy. A comprehensive sensitivity analysis further confirms the stability and practical applicability of MFP across diverse scenarios. Overall, MFP provides an effective, privacy-aware framework for graph learning in domains characterized by missing or sensitive features. | Recanati building, room 303. |
| Nov. 18 | Itzik Ziv | TAU | **The Impact of LLM-Generated Reviews on Recommender Systems: Textual Shifts, Performance Effects, and Strategic Platform Control**<br>The rise of generative AI technologies is reshaping content-based recommender systems (RSes), which increasingly encounter AI-generated content alongside human-authored content. This study examines how the introduction of AI-generated reviews influences RS performance and business outcomes. We analyze two distinct pathways through which AI content can enter RSes: user-centric, in which individuals use AI tools to refine their reviews, and platform-centric, in which platforms generate synthetic reviews directly from structured metadata. Using a large-scale dataset of hotel reviews from TripAdvisor, we generate synthetic reviews with LLMs and evaluate their impact across the training and deployment phases of RSes. We find that AI-generated reviews differ systematically from human-authored reviews across multiple textual dimensions. Although both user- and platform-centric AI reviews enhance RS performance relative to models without textual data, models trained on human reviews consistently achieve superior performance, underscoring the quality of authentic human data. Human-trained models generalize robustly to AI content, whereas AI-trained models underperform on both content types. Furthermore, tone-based framing strategies (encouraging, constructive, or critical) substantially enhance the effectiveness of platform-generated reviews. Our findings highlight the strategic importance of platform control in governing the generation and integration of AI-generated reviews, ensuring that synthetic content supports recommendation robustness and sustainable business value. | Recanati building, room 303. |
| Dec. 23 | Dr. Natalia Silberstein | Data Science Team Lead at Teads | TBA | |
| Dec. 30 | Prof. Noam Koenigstein | TAU | TBA | |
| Jan. 13 | Daniella Schmitt | Nova School of Business and Economics | TBA | |
| Jan. 27 | Dr. Asi Messica | Shield | TBA | |
| Mar. 17 | Avichay Chriqui | TAU | TBA | |
| Apr. 28 | Osnat Mokryn | Haifa Univ. | TBA | |
| May 5 | Prof. Avi Goldfarb | | TBA | |
Past Seminars: