<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Autonomous Driving on 朝花夕拾</title>
    <link>https://xwlu.github.io/tags/%E8%87%AA%E5%8A%A8%E9%A9%BE%E9%A9%B6/</link>
    <description>Recent content in Autonomous Driving on 朝花夕拾</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Fri, 15 May 2026 14:46:23 +0800</lastBuildDate>
    <atom:link href="https://xwlu.github.io/tags/%E8%87%AA%E5%8A%A8%E9%A9%BE%E9%A9%B6/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>RAD-2: Scaling Reinforcement Learning in a Generator-Discriminator Framework</title>
      <link>https://xwlu.github.io/post/robotics/e2e/rad_2/</link>
      <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/rad_2/</guid>
      <description>Imagine teaching a novice how to drive. Imitation Learning is like handing him a huge pile of &amp;quot;expert-driver recordings&amp;quot; to watch over and over: he picks up plenty of driving skills, but the problem is that &lt;strong&gt;he only learns &amp;quot;how to drive&amp;quot; and never pays a price for his own mistakes&lt;/strong&gt;.</description>
    </item>
    <item>
      <title>MOSAIC: A Scale-Aware Magic Circle for Autonomous Driving Data Selection</title>
      <link>https://xwlu.github.io/post/robotics/e2e/mosaic/</link>
      <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/mosaic/</guid>
      <description>In Physical AI, and in autonomous driving in particular, the industry's long-standing recipe has been to &lt;strong&gt;&amp;ldquo;feed the model as much data as possible&amp;rdquo;&lt;/strong&gt;. But this brings a huge pain point: &lt;strong&gt;&amp;ldquo;lopsided skills&amp;rdquo; and an &amp;ldquo;efficiency black box&amp;rdquo;&lt;/strong&gt;.</description>
    </item>
    <item>
      <title>DriveTransformer: Unified Transformer for Scalable End-to-End Autonomous Driving</title>
      <link>https://xwlu.github.io/post/robotics/e2e/drive_transformer/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/drive_transformer/</guid>
      <description>Imagine a traditional autonomous driving system as a &lt;strong&gt;rigid assembly-line factory&lt;/strong&gt;:</description>
    </item>
    <item>
      <title>SparseDriveV2: Scoring is All You Need for End-to-End Autonomous Driving</title>
      <link>https://xwlu.github.io/post/robotics/e2e/sparse_drive_v2/</link>
      <pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/sparse_drive_v2/</guid>
      <description>In multi-modal planning for end-to-end autonomous driving, the field was originally split into two camps:</description>
    </item>
    <item>
      <title>DiffusionDriveV2: Truncated Diffusion Model for End-to-End Autonomous Driving</title>
      <link>https://xwlu.github.io/post/robotics/e2e/diffusion_drive_v2/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/diffusion_drive_v2/</guid>
      <description>Autonomous driving planning faces a classic dilemma: &lt;strong&gt;diversity vs. quality&lt;/strong&gt;.</description>
    </item>
    <item>
      <title>End-to-End Autonomous Driving without Costly Modularization and 3D Manual Annotation</title>
      <link>https://xwlu.github.io/post/robotics/e2e/uad/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/uad/</guid>
      <description>Before discussing UAD, let's look at the established heavyweights of end-to-end autonomous driving (e.g., UniAD). Although they are billed as &amp;quot;end-to-end&amp;quot;, at heart they still imitate the traditional pipeline, designing cascaded &lt;strong&gt;perception → prediction → planning&lt;/strong&gt; sub-tasks.</description>
    </item>
    <item>
      <title>Epona: Autoregressive Diffusion World Model for End-to-End Autonomous Driving</title>
      <link>https://xwlu.github.io/post/robotics/e2e/epona/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/epona/</guid>
      <description>&lt;blockquote&gt;&#xA;&lt;p&gt;Video generation and trajectory planning for end-to-end autonomous driving&lt;/p&gt;&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>FutureX: Enhance End-to-End Autonomous Driving via Latent Chain-of-Thought World Model</title>
      <link>https://xwlu.github.io/post/robotics/e2e/future_x/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/future_x/</guid>
      <description>Passengers, please fasten your seat belts! Today we take a deep dive into a large autonomous driving model that thinks like a veteran driver: &lt;strong&gt;FutureX&lt;/strong&gt;.</description>
    </item>
    <item>
      <title>HiP-AD: Hierarchical and Multi-granularity Planning with Deformable Attention</title>
      <link>https://xwlu.github.io/post/robotics/e2e/hi_p_ad/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/hi_p_ad/</guid>
      <description>Today's end-to-end autonomous driving (E2E-AD) field exhibits a widespread oddity: &lt;strong&gt;full marks in &amp;ldquo;exam-oriented education&amp;rdquo;, but falling apart on the &amp;ldquo;real road&amp;rdquo;.&lt;/strong&gt;</description>
    </item>
    <item>
      <title>MomAD: Momentum-Aware Planning in End-to-End Autonomous Driving</title>
      <link>https://xwlu.github.io/post/robotics/e2e/mom_ad/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/mom_ad/</guid>
      <description>&lt;blockquote&gt;&#xA;&lt;p&gt;Paper title: &amp;ldquo;Don&amp;rsquo;t Shake the Wheel: Momentum-Aware Planning in End-to-End Autonomous Driving&amp;rdquo;&lt;/p&gt;&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>MotionLM: Multi-Agent Motion Forecasting as Language Modeling</title>
      <link>https://xwlu.github.io/post/robotics/e2e/motion_lm/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/motion_lm/</guid>
      <description>Turn multi-agent trajectory prediction in autonomous driving into a &amp;quot;word-chain&amp;quot; game: predict the future trajectories of vehicles and pedestrians the same way a language model predicts the next action token.</description>
    </item>
    <item>
      <title>ResAD: Normalized Residual Trajectory Modeling for End-to-End Autonomous Driving</title>
      <link>https://xwlu.github.io/post/robotics/e2e/res_ad/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/res_ad/</guid>
      <description>Most current end-to-end autonomous driving (E2EAD) models try to answer the same question: &amp;quot;&lt;strong&gt;What is the future trajectory?&lt;/strong&gt;&amp;quot; They directly predict the vehicle's absolute coordinates $(x, y)$ for the next few seconds from sensor data.</description>
    </item>
    <item>
      <title>World4Drive - An End-to-End Autonomous Driving World Model without Perception Annotations</title>
      <link>https://xwlu.github.io/post/robotics/e2e/world4_drive/</link>
      <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/world4_drive/</guid>
      <description>&lt;blockquote&gt;&#xA;&lt;p&gt;The core idea of this paper can be summarized as: &lt;strong&gt;how to train a veteran driver who can &amp;quot;imagine&amp;quot; the future on his own and has a strong sense of spatial direction&lt;/strong&gt;.&lt;/p&gt;&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>LAW - Enhancing End-to-End Autonomous Driving with Latent World Model</title>
      <link>https://xwlu.github.io/post/robotics/e2e/law/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://xwlu.github.io/post/robotics/e2e/law/</guid>
      <description>By performing action-conditioned future feature prediction in latent space, it achieves annotation-free learning of deep scene features and significantly improves the planning accuracy of end-to-end driving.</description>
    </item>
  </channel>
</rss>
