[{"content":"When you average information from your neighbors, and your neighbors are mostly wrong, you get worse — not better.\nThat sentence sounds obvious when you read it. But an entire class of machine learning models does exactly this, billions of times per second, on some of the most important data structures in computer science. This post is about what happens when we tried to fix that, where it worked, and — honestly — where it didn\u0026rsquo;t.\nWhat Are Graph Neural Networks? A graph is a collection of things (nodes) connected by relationships (edges). Social networks, molecules, citation databases, web pages — all graphs. A graph neural network (GNN) is a neural network that learns by aggregating information from neighbors in a graph. Each node looks at what its neighbors look like, combines that with its own features, and updates itself. Stack a few layers of this, and information flows across the graph.\nThe key operation is aggregation: take your neighbors\u0026rsquo; features, average them, and use the result to update yourself. This is sometimes called \u0026ldquo;message passing\u0026rdquo; — nodes send messages to their neighbors, and each node reads its inbox to decide what it is.\nThe Problem: When Neighbors Disagree This works beautifully when connected nodes tend to be similar — papers cite related papers, friends share interests. This property is called homophily (\u0026ldquo;love of the same\u0026rdquo;), and most popular GNN architectures were designed with it in mind.\nBut not all graphs are homophilous. In fraud detection, fraudsters connect to legitimate users, not to each other. In dating networks, well, you get the idea. When connected nodes tend to have different labels, the graph exhibits heterophily (\u0026ldquo;love of the different\u0026rdquo;).\nHere is the concrete failure mode. Suppose you are a node in class A, and 80% of your neighbors are class B. 
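Here is that failure mode in a few lines of NumPy (a toy sketch with made-up one-dimensional features, not numbers from any benchmark):

```python
import numpy as np

# Toy 1-D features: class A sits near -1, class B near +1 (illustrative values).
self_feat = np.array([-1.0])                                      # our node: class A
neighbor_feats = np.array([[0.9], [1.1], [1.0], [0.8], [-1.05]])  # 4 of 5: class B

# Plain mean aggregation, the core step of a standard GNN layer:
aggregated = neighbor_feats.mean(axis=0)   # about 0.55, firmly on the class B side
```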
A standard GNN averages your neighbors\u0026rsquo; features and hands you the result. That average looks like class B. You just got worse at knowing you are class A. The graph structure that was supposed to help you has actively misled you.\nThis is not a theoretical concern. On heterophilous benchmark datasets, a plain MLP (a multi-layer perceptron — a neural network that completely ignores the graph and only looks at each node\u0026rsquo;s own features) often beats GNNs. The graph is hurting, not helping.\nOur Idea: Route Neighbors by How Different They Look The core insight behind our method, CSNA (Cost-Sensitive Neighborhood Aggregation), is simple: before you aggregate, measure how different each neighbor looks from you in a learned feature space. Then route neighbors through different channels depending on that measurement.\nFor each edge, we compute a cost estimating how likely two connected nodes are to disagree. This cost has two parts: one based on how different the nodes\u0026rsquo; current representations look (observable divergence), and one the model learns (a heuristic estimate of unreliability). The default version uses only the observable divergence; an optional extension adds the learned estimate, but in practice we found the simpler version works just as well.\nBased on this cost, each neighbor\u0026rsquo;s message gets soft-routed into one of two channels:\nA concordant channel for neighbors that look similar to you (low cost) — these get standard aggregation, because they probably agree with your label. A discordant channel for neighbors that look different (high cost) — these get a separate transformation, because blindly averaging them in would dilute your signal. A learned gate then decides, for each node, how much to trust each channel versus its own features. 
On a heterophilous graph, the model can learn to down-weight the concordant channel (few useful same-class neighbors) and up-weight its own features or the separately-processed discordant signal.\nWhere It Worked — and Where It Didn\u0026rsquo;t We tested CSNA on six standard heterophily benchmarks, comparing against seven baselines including GCN, GAT, GraphSAGE, MLP, and several methods specifically designed for heterophily (H2GCN, GPRGNN, ACM-GNN).\nThe wins. On the adversarial-heterophily datasets, CSNA was competitive with the best methods. It achieved the highest accuracy on Cornell (72.7%) and was statistically tied for first on Actor (35.7% vs. GPRGNN\u0026rsquo;s 36.0% — within one standard deviation). On Texas (77.0%) and Wisconsin (79.6%), it was within a few points of the best. All methods were tuned over the same hyperparameter grid for a fair comparison.\nThe failures. On Chameleon and Squirrel, CSNA scored 54.6% and 37.8% respectively — well below plain GCN, which hit 67.3% and 53.4%. This was not just our method failing; every heterophily-specific method (H2GCN, GPRGNN, ACM-GNN) also lost to GCN on these two datasets.\nThe Failure Is the Finding Why does CSNA fail on Chameleon and Squirrel? Because there are two fundamentally different kinds of heterophily, and our method only handles one.\nAdversarial heterophily is when your different-class neighbors are genuinely misleading. Their features look nothing like yours, and averaging them in corrupts your signal. On datasets like Cornell and Texas (small university webpage networks), this is the dominant pattern. CSNA handles this well — it identifies misleading neighbors by cost and routes them away from the main aggregation.\nInformative heterophily is when being connected to different-class neighbors is itself a useful signal. 
On Chameleon and Squirrel (Wikipedia article networks), the pattern of which classes your neighbors belong to is highly informative, even though those neighbors have different labels. A node surrounded by class B and class C neighbors might reliably be class A — not because B and C features help, but because the structure of having B and C neighbors is the clue. Standard GCN captures this structural information through its aggregation. CSNA, by trying to separate and re-route different-looking neighbors, actually disrupts the structural signal that makes these datasets learnable.\nThis distinction — adversarial vs. informative heterophily — is not new to our paper, but our results make it concrete. If a cost-sensitive routing method helps on your dataset, your heterophily is probably adversarial. If it hurts, the heterophily is probably informative, and you should use a method that preserves structural patterns rather than filtering them.\nWhat We Learned Three takeaways from this work:\nNot all heterophily is the same. The label \u0026ldquo;heterophilous graph\u0026rdquo; hides a crucial distinction. Methods that assume different-class neighbors are harmful will fail when those neighbors are informative. The field needs better ways to characterize what kind of heterophily a graph exhibits before choosing a method.\nHonest failure analysis is underrated. Reporting where and why CSNA fails told us more about the problem than the wins did. The two failing datasets are exactly where the entire \u0026ldquo;route-by-similarity\u0026rdquo; paradigm breaks down.\nSimple mechanisms can be diagnostic tools. 
The cost function\u0026rsquo;s ability (or inability) to separate edge types serves as a diagnostic for the heterophily regime — revealing what kind of graph you\u0026rsquo;re dealing with before committing to an architecture.\n\u0026ldquo;Cost-Sensitive Neighborhood Aggregation for Heterophilous Graphs: When Does Per-Edge Routing Help?\u0026rdquo; by Eyal Weiss, Technion. arXiv preprint, 2026.\nCode and data: github.com/eyal-weiss/CSNA-public ","permalink":"https://eyal-weiss.github.io/blog/2026-03-31-per-edge-routing-gnns/","summary":"A cost-sensitive neighborhood aggregation method for GNNs that routes neighbors by similarity — and what its failures reveal about two fundamentally different kinds of heterophily.","title":"When Your Neighbors Are Wrong, Listening to Them Makes You Worse"},{"content":"Picture a warehouse floor. Dozens of robots scurry between shelves, picking up packages and delivering them to new locations. The warehouse layout needs to change — maybe seasonal inventory is rotating, or a new batch of products just arrived and the shelves need reshuffling. Every robot must coordinate with every other robot to avoid collisions, and the goal is simple: finish the rearrangement as fast as possible.\nThis is the Multi-Agent Warehouse Rearrangement (MAWR) problem, and it turns out that the way you think about it determines how well you can solve it.\nThe natural approach (and its limitation) The intuitive way to tackle this is agent-centric: assign each robot a task (\u0026ldquo;Robot 3, take the red package from shelf A to shelf D\u0026rdquo;), plan a collision-free path for each robot, and send them on their way. This is essentially what existing methods do — they treat it as a variant of multi-agent path finding, where the robots are the main characters and the packages are along for the ride.\nThis works, and it\u0026rsquo;s fast. But it leaves performance on the table. Why? 
Because once you assign a task to a specific robot, you\u0026rsquo;ve locked in a commitment. Robot 3 must carry the red package the entire way, even if Robot 7 happens to be passing right by the halfway point and could easily take over.\nFlipping the script In our recent paper (which received the Best Paper Award at SoCS 2025), my co-authors Yaakov Sherma, Oren Salzman, and I proposed a surprisingly simple shift in perspective: plan for the packages, not the robots.\nInstead of asking \u0026ldquo;where should each robot go?\u0026rdquo;, we ask \u0026ldquo;how should each package move?\u0026rdquo;\nFirst, we compute optimal, collision-free paths for the packages themselves — as if the packages were the agents. Then, we figure out which robots should execute each movement using a network-flow algorithm. If some package movement can\u0026rsquo;t be carried out by any available robot, we feed that information back and adjust the package paths accordingly.\nThis is the core of our algorithm, NAT-CBS (Non-Atomic Task Conflict-Based Search).\nWhy \u0026ldquo;non-atomic\u0026rdquo; matters The word non-atomic captures the key advantage. In traditional approaches, a task is atomic: one robot picks up a package and carries it all the way to its destination. In our approach, tasks are non-atomic: Robot 1 might carry a package halfway across the warehouse, set it down, and Robot 2 — who happens to be nearby — picks it up and finishes the job.\nThis isn\u0026rsquo;t just a theoretical nicety. Consider a small example: a 3×3 grid with four colored packages and two robots. The packages need to be shuffled to new positions. An agent-centric approach finds a solution in 11 timesteps. Our obstacle-centric approach finds the optimal solution in just 8 timesteps — a 27% improvement — precisely because it allows robots to hand off packages mid-transit.\nDoes it actually work? 
We tested NAT-CBS against the state-of-the-art method (MAPF-DECOMP) across hundreds of randomly generated warehouse instances on 8×8 and 15×20 grid maps.\nThe results were striking. NAT-CBS consistently produces better plans — in many instances, the competing method\u0026rsquo;s makespan was 1.5× to 3× worse than optimal. The gap grows with problem complexity: the more packages that need moving, the more the agent-centric approach struggles with suboptimal task assignments, while the obstacle-centric view continues to find efficient coordinated solutions.\nThe trade-off is runtime. NAT-CBS is significantly slower because it\u0026rsquo;s solving a harder problem — it guarantees optimality rather than settling for a quick heuristic answer. Think of it as the difference between finding some route through traffic and finding the fastest one. The latter takes more computation but can save significant time in execution.\nThe bigger picture What I find most interesting about this work is the meta-lesson: sometimes the best way to solve a coordination problem is to stop focusing on the actors and start focusing on what needs to happen. The robots are interchangeable — any robot can move any package. By planning around the packages (the things that actually need to reach specific locations), we unlock a more natural and efficient way to coordinate the whole system.\nThis kind of perspective shift — from who does the work to what work needs doing — likely extends beyond warehouses to other multi-agent coordination domains. And with growing interest in bounded-suboptimal variants that trade a small amount of optimality for dramatically faster computation, practical deployment may not be far off.\nY. Sherma, E. Weiss, O. Salzman. \u0026ldquo;From Agent Centric to Obstacle Centric Planning: A Makespan-Optimal Algorithm for the Multi-Agent Warehouse Rearrangement Problem.\u0026rdquo; SoCS 2025. 
Paper · Code ","permalink":"https://eyal-weiss.github.io/blog/2026-03-23-warehouse-rearrangement/","summary":"A simple change in perspective — planning paths for items instead of robots — leads to provably optimal warehouse rearrangement and up to 2x faster completion times.","title":"What if warehouse robots planned around the packages, not themselves?"},{"content":"Have you ever watched a robot try to perform a task in the real world—perhaps a robotic arm trying to grab a specific object out of a cluttered bin, or a humanoid robot trying to navigate a messy room? If you have, you might have noticed a slight hesitation. The robot pauses, computes, moves a little, pauses again, and then commits to the action. That pause isn\u0026rsquo;t uncertainty; it\u0026rsquo;s intense calculation. The robot is desperately trying to figure out how to get from Point A to Point B without smashing its elbow into a table or colliding with a human.\nThis process is called \u0026ldquo;motion planning,\u0026rdquo; and for decades, it has been a major bottleneck in robotics. Robots can move fast, but they think slow.\nBut thanks to brilliant, relatively recent research (which appeared in late 2023 and was published at the 2024 IEEE International Conference on Robotics and Automation (ICRA)), that robotic \u0026ldquo;thinking pause\u0026rdquo; might soon be a thing of the past. A recent paper titled \u0026ldquo;Motions in Microseconds via Vectorized Sampling-Based Planning,\u0026rdquo; authored by Wil Thomason, Zachary Kingston, and Lydia E. Kavraki, has introduced a technique that drastically speeds up how robots plan their movements. How drastic? 
They\u0026rsquo;ve taken planning times from milliseconds (thousandths of a second) down to microseconds (millionths of a second).\nHere is a simple breakdown of how they did it, and why it\u0026rsquo;s a game-changer for the future of machines.\nThe Problem: Connecting the Dots in the Dark To understand the breakthrough, we first need to understand the problem.\nWhen a robot needs to move, it doesn\u0026rsquo;t just \u0026ldquo;see\u0026rdquo; the path like we do. It has to mathematically test thousands of possibilities. Imagine you are in a pitch-black, crowded room, and you need to get to the exit. You can\u0026rsquo;t see the obstacles. The only way to find a safe path is to reach out your hands and check random spots in front of you. \u0026ldquo;Is this spot clear? Yes. Okay, is that spot clear? No, that\u0026rsquo;s a chair.\u0026rdquo;\nTraditional robot motion planning (specifically a popular type called \u0026ldquo;sampling-based planning\u0026rdquo;) works a bit like this. The robot\u0026rsquo;s computer randomly picks points in space and checks: \u0026ldquo;If I move my arm here, will I hit anything?\u0026rdquo; It has to ask this question thousands of times to build a \u0026ldquo;connect-the-dots\u0026rdquo; map of safe passage.\nChecking every single dot takes time. If the environment is complex, the calculations pile up, and the robot has to pause to think.\nThe Solution: The Supermarket Analogy Computers get faster by doing things in parallel—doing multiple jobs at once. Until now, robotics has mostly relied on \u0026ldquo;coarse-grained\u0026rdquo; parallelism. Think of a supermarket. If the line is long, the manager opens more checkout lanes. Now you have four cashiers working simultaneously. 
That\u0026rsquo;s faster, but each cashier is still scanning items one by one, beep\u0026hellip; beep\u0026hellip; beep.\nThe breakthrough by Thomason, Kingston, and Kavraki utilizes something different: fine-grained parallelism, specifically through a technique called vectorization. Imagine we go back to that single cashier. But instead of a regular scanner, we give them a futuristic \u0026ldquo;super-scanner.\u0026rdquo; When they wave it over a cart, it doesn\u0026rsquo;t beep once; it instantly scans 16 items simultaneously in a single BEEP.\nThe researchers found a way to apply this \u0026ldquo;super-scanner\u0026rdquo; approach to the robot\u0026rsquo;s safety checks. Modern CPUs have special features (called SIMD instructions) designed to do this kind of math, but they are notoriously difficult to apply to the irregular, messy problems of robot motion planning. The brilliance of this paper lies in reorganizing the math so the robot\u0026rsquo;s computer isn\u0026rsquo;t asking, \u0026ldquo;Is this one point safe?\u0026rdquo; It\u0026rsquo;s asking, \u0026ldquo;Are these 16 points safe?\u0026rdquo; and getting the answer for all of them at the exact same instant.\nThe Implications: Robots That React in Real-Time By effectively letting the robot check 16 (or more) times as many possibilities in the same amount of time, the speed limit on robotic thinking has been shattered.\nWhy does shifting from milliseconds to microseconds matter?\n1. Truly Dynamic Robots Today, if you throw a ball at a standard research robot, it will likely freeze. By the time it plans a path to catch the ball, the ball has already sailed past. The world changed faster than the robot could think.\nWith microsecond planning, a robot can re-plan its entire movement path hundreds of times during the action. If the environment changes—a person steps in the way, or the target moves—the robot can instantly adapt its path smoothly without stopping.\n2. 
Smarter Control Systems In advanced robotics, there is a technique called Model Predictive Control (MPC). It\u0026rsquo;s basically the robot constantly asking, \u0026ldquo;Given what just happened, what is the best thing to do for the next few seconds?\u0026rdquo; To work well, MPC needs to run very, very fast.\nPreviously, motion planning was too slow to be used directly inside this tight control loop. This new vectorized approach is so fast that high-level planning can now happen at the same speed as low-level motor control. This merges \u0026ldquo;planning\u0026rdquo; and \u0026ldquo;doing\u0026rdquo; into a single, seamless process.\n3. More Complex Machines Generally, the more joints a robot has, the harder it is to plan its motion. A snake robot is harder to control than a simple robotic arm.\nThis new speed boost means we can control highly complex robots just as quickly as simpler ones, opening the door for more capable designs in unstructured environments like disaster zones or homes.\nA New \u0026ldquo;Speed of Thought\u0026rdquo; for Machines The work by Thomason, Kingston, and Kavraki is a fantastic example of unlocking hidden potential in the hardware we already have. By cleverly rethinking the mathematics of movement to fit modern processor architecture, they have given robots a massive upgrade in their \u0026ldquo;speed of thought.\u0026rdquo;\nThe next time you see a robot moving fluidly, adapting instantly to a chaotic world, remember that the secret might just be fine-grained parallelism, performing motions in microseconds.\n","permalink":"https://eyal-weiss.github.io/blog/2025-12-31-motion-planning-robots/","summary":"\u003cp\u003eHave you ever watched a robot try to perform a task in the real world—perhaps a robotic arm trying to grab a specific object out of a cluttered bin, or a humanoid robot trying to navigate a messy room? If you have, you might have noticed a slight hesitation. 
The robot pauses, computes, moves a little, pauses again, and then commits to the action. That pause isn\u0026rsquo;t uncertainty; it\u0026rsquo;s intense calculation. The robot is desperately trying to figure out how to get from Point A to Point B without smashing its elbow into a table or colliding with a human.\u003c/p\u003e","title":"Motion planning time for robots is almost immediate"},{"content":"Research I am a Postdoctoral Scholar at the Computational Robotics Lab (CRL) in the Computer Science Department at Technion — Israel Institute of Technology , working with Prof. Oren Salzman .\nMy research focuses on Planning, Search \u0026amp; Optimization, with applications in AI, robotics, and combinatorial problems. I develop algorithms for generalized automated planning with dynamic action models, combining tools from AI planning, graph theory, and combinatorial optimization.\nCurrent Research Areas Search \u0026amp; AI Planning — generalized shortest-path problems, heuristic search, numeric planning Motion Planning for Robotics — sampling-based planning, bidirectional search, lazy evaluation Multi-Agent Systems — warehouse rearrangement, coordinated planning Combinatorial Optimization — pattern databases, multi-objective optimization Education Ph.D. in Computer Science — Bar-Ilan University, supervised by Prof. Gal A. Kaminka (MAVERICK group) M.Sc. in Electrical Engineering — Tel Aviv University, supervised by Prof. Michael Margaliot B.Sc. 
in Electrical Engineering — Tel Aviv University Awards Best Paper Award — International Symposium on Combinatorial Search (SoCS), 2025 Service Workshop Organizer: HSDIP Workshop at ICAPS 2025 (Melbourne); RDDPS Workshop at ICAPS 2024 (Banff) and ICAPS 2023 (Prague) Contact Email: eweiss@campus.technion.ac.il Office: Taub Building, Room 744, Technion, Haifa 3200003, Israel ","permalink":"https://eyal-weiss.github.io/about/","summary":"About Eyal Weiss","title":"About"},{"content":"This website is exempt from mandatory accessibility requirements under the applicable amendment to the internet accessibility regulations. It does not provide any services and is operated solely by a private individual, rather than by a nonprofit organization or a commercial entity.\n","permalink":"https://eyal-weiss.github.io/accessibility/","summary":"\u003cp\u003eThis website is exempt from mandatory accessibility requirements under the applicable amendment to the internet accessibility regulations. It does not provide any services and is operated solely by a private individual, rather than by a nonprofit organization or a commercial entity.\u003c/p\u003e","title":"Accessibility Statement"},{"content":"Email: eweiss@campus.technion.ac.il Office: Taub Building, Room 744, Technion, Haifa 3200003, Israel\nHow to find me on The Web:\nGoogle Scholar LinkedIn ResearchGate YouTube Semantic Scholar Web of Science dblp ORCiD GitHub I also have an active X account which is mostly for non-academic discussions. Feel free to contact me.\n","permalink":"https://eyal-weiss.github.io/contact/","summary":"Contact","title":"Contact"},{"content":"Being a researcher requires developing a number of skills that are not part of the standard curriculum of academic courses. This includes: Writing scientific papers, Preparing posters, Delivering oral presentations, Reviewing papers.\nSharpening these skills can make a significant impact on the exposure of your work and help build reputation. 
Over the last few years I have come across valuable resources that provide concrete suggestions and tips from highly respected scholars, which I am happy to share:\nWriting papers with mathematical content by John N. Tsitsiklis contains much-needed guidance on effective writing for young researchers (and a few more helpful pointers appear in the link) Delivering oral presentations by Patrick Henry Winston is a MUST-watch video! Mathematical English by Jan Nekovář is a concise introduction, filled with examples, to describing many common mathematical terms in English Writing papers and giving talks by Wheeler Ruml is a helpful video containing various high-level suggestions Advice for PhD students by Sven Koenig is an hour-long video filled with gems Oral and poster presentation by Reuven Boxman gives many practical tips for preparing and delivering presentations and posters, with a focus on getting the right attention at conferences Mistakes reviewers make by Niklas Elmqvist aids in understanding the peer review process and provides concrete guidance on how to be a better reviewer A few more practical tips of my own:\nDoctoral consortiums — If you have the option to participate in a doctoral consortium that is held under the umbrella of a conference that relates to your research field, then you should definitely go for it. These are events that have curated materials for PhD students, with the goals of increasing exposure to the relevant research community (which is important!) and providing concrete tools for doing more effective research. Direct feedback from experienced scholars in your research community is valuable! 
From a timing perspective, it is best to aim for participation after obtaining some results you can share, typically about a year into the PhD.\nSummer/winter schools — Concentrated summer/winter schools in topics that are of interest to you are also useful, especially as they usually don\u0026rsquo;t require preparation, which makes them low-overhead and high-gain. A fine combination.\nTeach a course! — Even if you are not thrilled about the idea, it is still worth holding a teaching position in a university for at least a semester or two. I guarantee that you will learn a lot from the process. Research and teaching are closely related, so there is significant cross-fertilization.\n","permalink":"https://eyal-weiss.github.io/toolbox/","summary":"Graduate Student Toolbox","title":"Graduate Student Toolbox"},{"content":"News: announcements and ongoing projects\nWorking on a very exciting research project that analyzes LLMs using\u0026hellip; I\u0026rsquo;ll share once it\u0026rsquo;s ready :)\nSubmitted my first paper in the field of Graph Neural Networks (a technique inspired by search ideas); check out the blog.\nSubmitted a cool \u0026amp; semi-educational paper to IROS 2026.\nOpened a blog 🥳 The idea is to reflect ongoing developments in the field of robotics, through my POV, in simple language. Once in a while I will also write about other (non-robotics-related) technical aspects of my work (AI, planning, search, control).\nSubmitted several grant proposals. Further details will be released upon notification. 
Update: Toyota research grant approved :)\nPhysical integration of our group\u0026rsquo;s motion planner with our prototype snake-like robot was successfully achieved.\nStarted work on extending our 2025 SoCS paper \u0026ldquo;From Agent Centric to Obstacle Centric Planning: A Makespan-Optimal Algorithm for the Multi-Agent Warehouse Rearrangement Problem\u0026rdquo; in a few natural sub-optimal directions.\nExtension of our 2025 IJCAI paper \u0026ldquo;Bidirectional Search while Ensuring Meet-In-The-Middle via Effective and Efficient-to-Compute Termination Conditions\u0026rdquo; is almost done 😊. It extends both the theory, to better explain our results, and the empirical evaluation, with additional domains, metrics, and another ablation study.\nSubmitted a really cool paper to ICRA 2026! More details will be released soon 😀 Update: ICRA -\u0026gt; WAFR.\nWorking on a high-performance motion planner that unifies many proven techniques: lazy evaluation, sampling-based search, incremental computation, an anytime search scheme (and more).\nOur joint (cross-faculty collaboration between computer science, mechanical engineering and civil engineering) project on \u0026ldquo;Autonomous Multi-Stable Robot (MSR) for Search and Exploration\u0026rdquo; was renewed for another year 🥳. This project revolves around developing a physical MSR, which resembles a snake-like robot, and various capabilities for it \u0026ndash; actuation, sensing, state estimation, motion planning and more. It is challenging and a lot of fun!\nWorkshop organization:\nCo-organizing RIPL Workshop at ICAPS 2026 (July, Montréal) Co-organized HSDIP Workshop at ICAPS 2025 (November, Melbourne) Co-organized RDDPS Workshop at ICAPS 2024 (June, Banff) Co-organized RDDPS Workshop at ICAPS 2023 (July, Prague) ","permalink":"https://eyal-weiss.github.io/news/","summary":"News","title":"News"},{"content":"Open Access Philosophy:\nI am a strong proponent of open access research. 
In my opinion, scientists should make every effort to make their research openly available online, to facilitate rapid and easy exchange of ideas, especially when the research is conducted in publicly funded institutes. For this reason, all my articles are either available through open access publication platforms, such as AAAI Publications , or when officially published in closed access platforms, open access is possible through arXiv . All versions are accessible through my Google Scholar page. Similarly, all the software produced relating to our research is fully accessible with standard open source license that supports free distribution.\n","permalink":"https://eyal-weiss.github.io/open-access/","summary":"Open Access","title":"Open Access"},{"content":"For a full list, see my Google Scholar profile.\nTechnical Companions — For many of my papers I have prepared undergraduate-accessible guides that explain every equation, theorem, and algorithm step by step, with symbol breakdowns and numerical examples. Look for the Technical Companion (PDF) links below.\nCombinatorial Search Bidirectional Search while Ensuring Meet-In-The-Middle via Effective and Efficient-to-Compute Termination Conditions · Technical Companion (PDF) Y. Wang, B. Mu, E. Weiss, O. Salzman — IJCAI 2025 bidirectional search meet-in-the-middle termination conditions\nGeneralizing Multi-Objective Search via Objective-Aggregation Functions · Technical Companion (PDF) H. Peer, E. Weiss, R. Alterovitz, O. Salzman — arXiv preprint multi-objective search objective aggregation robotics planning\nTightest Admissible Shortest Path · Technical Companion (PDF) E. Weiss, A. Felner, G. A. Kaminka — ICAPS 2024 shortest path admissible heuristics graph search\nA Generalization of the Shortest Path Problem to Graphs with Multiple Edge-Cost Estimates · Technical Companion (PDF) E. Weiss, A. Felner, G. A. 
Kaminka — ECAI 2023 shortest path multiple edge costs cost uncertainty\nAI Planning PDBs Go Numeric: Pattern-Database Heuristics for Simple Numeric Planning · Technical Companion (PDF) D. Gnad, L. Alon, E. Weiss, A. Shleyfman — AAAI 2025 numeric planning pattern databases heuristic search\nPlanning with Multiple Action-Cost Estimates · Technical Companion (PDF) E. Weiss, G. A. Kaminka — ICAPS 2023 action-cost estimation dynamic models classical planning\nPosition Paper: Online Modeling for Offline Planning E. Weiss, G. A. Kaminka — RDDPS Workshop, ICAPS 2022 online learning action models planning under uncertainty\nMotion Planning \u0026amp; Robotics To be updated soon 😊\nMulti-Agent Systems From Agent Centric to Obstacle Centric Planning: A Makespan-Optimal Algorithm for the Multi-Agent Warehouse Rearrangement Problem · Technical Companion (PDF) 🏆 Best Paper Award Y. Sherma, E. Weiss, O. Salzman — SoCS 2025 multi-agent planning warehouse rearrangement makespan optimization\nControl Theory \u0026amp; Dynamical Systems A Generalization of Linear Positive Systems with Applications to Nonlinear Systems: Invariant Sets and the Poincaré–Bendixson Property · Technical Companion (PDF) E. Weiss, M. Margaliot — Automatica, 2021 positive systems invariant sets nonlinear systems Poincaré–Bendixson\nOutput Selection and Observer Design for Boolean Control Networks: A Sub-Optimal Polynomial-Complexity Algorithm · Technical Companion (PDF) E. Weiss, M. Margaliot — IEEE Control Systems Letters, 2019 Boolean control networks observer design output selection\nA Polynomial-Time Algorithm for Solving the Minimal Observability Problem in Conjunctive Boolean Networks · Technical Companion (PDF) E. Weiss, M. Margaliot — IEEE Transactions on Automatic Control, 2019 Boolean networks observability polynomial-time algorithm\nMinimal Controllability of Conjunctive Boolean Networks is NP-Complete · Technical Companion (PDF) E. Weiss, M. Margaliot, G. 
Even — Automatica, 2018 Boolean networks controllability NP-completeness\n","permalink":"https://eyal-weiss.github.io/publications/","summary":"Selected Publications","title":"Selected Publications"},{"content":"Cost-Sensitive Neighborhood Aggregation (CSNA) for Heterophilous Graphs — The code for our new arXiv paper, about per-edge routing in Graph Neural Networks (GNNs), is available here.\nMulti-Agent Warehouse Rearrangement — The code for our SoCS 2025 paper, about optimal multi-agent warehouse rearrangement, is available here.\nBLITstar — The code for the ICRA 2025 paper \u0026ldquo;Asymptotically Optimal Sampling-Based Motion Planning Through Anytime Incremental Lazy Bidirectional Heuristic Search\u0026rdquo;, about motion planning, is available here. It is also incorporated in OMPL.\nMEET — The code for our IJCAI 2025 paper, about bidirectional search, is available here. It will also be incorporated in OMPL.\nNumeric Fast Downward — Our paper on numeric PDBs is implemented in the numeric planner NFD.\nPlanDEM — Papers from my PhD were mostly tested using PlanDEM, a domain-independent planner that provides implementations of algorithms that work with dynamically estimated action models.
This is an open-source project that typically gets major updates after a new paper on the topic is published.\nContact me for questions or requests.\n","permalink":"https://eyal-weiss.github.io/software/","summary":"Software","title":"Software"},{"content":"Talks - slides, posters, videos, and paper PDFs:\n2025 SoCS 2025, main track (August, Glasgow): From Agent Centric to Obstacle Centric Planning: A Makespan-Optimal Algorithm for the Multi-Agent Warehouse Rearrangement Problem (Best Paper Award 🥳)\nIJCAI 2025, main track (August, Montreal): Bidirectional Search while Ensuring Meet-In-The-Middle via Effective and Efficient-to-Compute Termination Conditions Invited talk at SBPL, Carnegie Mellon University (May)\nAAAI 2025, main track (March, Philadelphia): PDBs Go Numeric: Pattern-Database Heuristics for Simple Numeric Planning 2024 Invited talk at CRL, Technion (June, Haifa) [slides]: Online Estimation of Edge Costs in Generalized Shortest-Path Problems: Extending Graph Definitions, Optimization Problems and Search Algorithms\nICAPS 2024, HSDIP Workshop (June, Banff): PDBs Go Numeric: Pattern-Database Heuristics for Simple Numeric Planning ICAPS 2024, main track (June, Banff) [slides]: Tightest Admissible Shortest Path 2023 ECAI 2023, main track (October, Krakow) [slides, poster]: A Generalization of the Shortest Path Problem to Graphs with Multiple Edge-Cost Estimates SoCS 2023, Doctoral Consortium (July, Prague) [slides, poster]: A Generalization of the Shortest Path Problem to Graphs with Multiple Edge-Cost Estimates ICAPS 2023, main track (July, Prague) [slides]: Planning with Multiple Action-Cost Estimates ICAPS 2023, RDDPS Workshop (July, Prague) [slides]: A Generalization of the Shortest Path Problem to Graphs with Multiple Edge-Cost Estimates ADAMS Conference 2023 (February, Jerusalem) [slides]: Current PhD lines of work\nBISFAI 2023 (February, Ramat Gan) [slides, poster]: Current PhD lines of work\n2022 ICAPS 2022, RDDPS Workshop
(June, virtual) [slides, video]: Planning with Dynamically Estimated Action Costs ICAPS 2022, RDDPS Workshop (June, virtual) [slides, video]: Online Modeling for Offline Planning ICAPS 2022, Doctoral Consortium (June, virtual) [slides, video, poster]: A Generalization of Automated Planning Using Dynamically Estimated Action Models IAAI 2022 (June, Haifa) [slides]: Planning with Dynamically Estimated Action Costs\nBar-Ilan CS graduate student meeting 2022 (May, Ramat Gan) [slides]: Concise introduction to my research \u0026amp; guidance for new MSc/PhD students\nADAMS Conference 2022 (May, Jerusalem) [poster]: Planning with Dynamically Estimated Action Costs\n","permalink":"https://eyal-weiss.github.io/talks/","summary":"Talks","title":"Talks"},{"content":"I love teaching! Over the last several years I\u0026rsquo;ve taught a variety of courses in engineering and computer science, covering topics in control theory, signal processing, statistics, optimization, programming, and AI. I view teaching as an opportunity to interact with curious students and to enrich their learning and growth.\nSince the 2024-2025 academic year I\u0026rsquo;m taking time off from teaching, to recharge. Teaching should be done with love, or not at all!\nAlthough I am a strong proponent of open-sourcing academic materials, the universities I teach at use a closed system (Moodle) for enrolled students, so the materials are not openly available. Fortunately, there are numerous other (probably better) open resources on the web. For instance, free, curated online materials for introductory statistics are available at OpenStax (which is a truly wonderful educational project, check it out!). In addition, I do have recorded lectures from the last two years, available on my YouTube channel. Students of current (and past!)
courses are more than welcome to contact me for assistance and advice.\nMy personal experience, both from participating in classes and from teaching them in academia, has taught me that knowing the course material and having goodwill, although essential, are typically not enough to do a good job as a teacher. Adding value to students (beyond the course\u0026rsquo;s written material) also requires preparation, experience, and a willingness to learn from others. It can take quite some time to reach a high level, where student feedback testifies to satisfaction and interest (personally, I\u0026rsquo;m still working on it 😃). It is thus especially important for young professionals to learn from the experience of others. I strongly recommend participating in the workshops regularly held at universities, intended for both graduate students and seasoned academic staff. I sympathize with the teaching philosophy of Aswath Damodaran, and find his presentation on teaching concrete and helpful.\n","permalink":"https://eyal-weiss.github.io/teaching/","summary":"Teaching","title":"Teaching"}]