TREEZE Astreze Leo P6 Robobus

80–90% of South Korea’s self-driving vehicles were retrofitted by TREEZE; today, the Astreze Leo P6 Robobus is the foremost of that company’s own AV products
(Image courtesy of TREEZE)

Six flying dragons

Rory Jackson investigates this driverless six-passenger bus, the jewel in the crown for one of South Korea’s leading and longest-running autonomous vehicle companies

The Republic of Korea is often cited as the most technologically advanced nation on Earth. Its population holds one of the largest concentrations of high-speed internet users globally, and its major corporate tech names such as Samsung, Hyundai and LG continue to lead much of the world in their innovations across consumer electronics, automotive systems and the digital economy.

Much of this is key to South Korea’s leadership in autonomy, which is also supported by government subsidies, an apt regulatory approach and dedication to key enabling infrastructure for autonomous vehicles (AVs), such as V2X (vehicle-to-everything) communications and smart roads. As a result, the country’s AV scene has featured countless self-driving car trials and active public transport services over the past several years.

And although a great many companies have contributed to that progress, TREEZE arguably stands above the rest in terms of labour-hours devoted to driverless mobility. Around 80–90% of the country’s self-driving vehicles have been retrofitted by TREEZE, and since its founding as a hardware engineering company in 2013, it has taken on more than a hundred projects across South Korea – including garbage trucks, tractors and construction vehicles – and developed a reputation as the ‘go-to’ partner for making vehicles autonomy-ready.

Over the years, the company has diversified into a variety of automotive equipment, even producing automated cleaning robots for semiconductor factories (including Samsung’s), to avoid putting all its eggs in the basket of the potentially volatile AV industry.

Despite that policy, TREEZE’s collective enthusiasm for driverless autonomy has only increased with time. Its mobility work today comprises around 80% of its business, and two years ago, its owners decided to expand its competencies to include AV software engineering – the final ingredient needed for them to start producing their own self-driving road vehicles.

As Vinjohn Chirakkal, head of TREEZE’s autonomous driving division tells us: “TREEZE started initially with confined ODDs [Operational Design Domains], but brought me in to lead on a much wider spectrum of vehicles and ODDs because there was a clear market for sustainable, self-driving, multifunctional vehicles, particularly transport or logistics vehicles doing ‘point A to point B’ operations, and moreover, some clear problems that they felt we could solve together.

“The biggest one by far was: why the cost? Even today, most AVs and their services are really expensive, prohibitively so in terms of commercial feasibility; even $400,000–$500,000 per roboshuttle in some cases, with $30,000–$40,000 in annual maintenance fees. You can’t scale a business like that; the clientele that can afford it is just too narrow. So, we were determined to achieve much lower costs while still producing excellent hardware and software relative to much of the rest of the field.”

A key step towards that has been TREEZE’s partnership with PIX Moving in China, the latter having produced a cost-effective platform in the PIX RoboBus, which the South Korean company identified as the perfect substrate for its own technologies and optimisations.

“We are actually designing our own low-speed electric mobility platforms for different markets and ODDs, but we have to enter the market with a functioning product first. We want to step onto the world stage with an affordable AV first, and then once we have a client base that understands the quality of our tech, we can look towards enticing them with pricier, fully homegrown vehicle designs,” Chirakkal muses.

Through an MoU and close collaboration, TREEZE today is the sole legal distributor of PIX RoboBuses in South Korea, which it uses to build the TREEZE Astreze (formerly ‘ASTRA’) Leo P6 Robobus – its six-passenger launch AV.

This autonomous pod vehicle measures roughly 3810 x 2220 x 1960 mm, with a cabin height of 1780 mm, floor height of 360 mm, seat height of 430 mm and a 4.8 m turning radius. It is fully electric, running on a 31.94 kWh battery pack, which fast-charges to full in one and a half hours (or five hours with a slower, more sustainable charge), and its electric motors and driving software together maintain a top speed of 30 kph, with a 140 km maximum range between charges.

“And thanks to how we’ve engineered it, we aim to price the Leo P6 at $200,000–$300,000 per unit,” Chirakkal says. “Our nomenclature uses ‘P6’ with PIX being the OEM, and six being the passenger total.”

From RoboBus to P6

TREEZE first made contact with PIX in early 2024 and received its initial PIX RoboBus units (incorporating its requested custom sensor layout) later that year, thereafter beginning the work of porting its autonomy stack onto the vehicle and refining it.

“The first big hurdle was licensing. Any vehicle on public roads here needs to go through quite excruciating checklists for hardware and software integrity. That pipeline of standards compliance is ongoing, but we’ve completed first trials in geofenced proving tracks, and we anticipate completing public road certification by the end of this year,” Chirakkal notes.

The PIX RoboBus is typically built upon PIX Moving’s Ultra-Skateboard Chassis Platform, although the platform has been modified in certain ways to TREEZE’s preferences.

The body has been built with a low-alloy high-strength steel frame for structural durability, as well as safety and mounting bays for TREEZE’s sensor suite. That incorporates Lidars, HD cameras and INS modules to support autonomous perception, localisation and decision-making functions.

Several qualities ease TREEZE’s hardware and software integration work when PIX’s vehicles arrive in-house. For one, the mechanical structure has been designed for modularity and accessibility, which the South Korean company tells us makes the RoboBus highly suitable for sub-component integration and for achieving differing automotive safety, physical and operational targets.

Interior mounting points and power interfaces (both 12 and 24 V) are reserved to simplify installation of subsystems such as HMI panels, routers or additional computing hardware. At its foundation, a load-bearing truss frame (made of a low-alloy high-strength structural steel, similarly to the body) within the platform is designed to support a non-load-bearing body, allowing for flexible modifications while still giving structural rigidity. Additionally, the steel body is built with a cage-structured design to maximise passenger protection in the event of an impact.

While the vehicle can carry six passengers, its interior can be customised – by PIX as OEM, as requested by TREEZE and its customers – to support a range of business models including shuttling, on-demand retail and mobile services per Chirakkal’s earlier allusion to the market demand for multifunctional, sustainable transport.

The cabin is built with a 360° glass view for a satisfying passenger experience (that being a crucial factor towards social acceptance of driverless transport vehicles), as well as ambient lighting, touchscreen controls and adjustable air conditioning functions. Additionally, passenger accessibility is optimised into the design with wide sliding doors and a low step-in height.

The Leo P6 is built upon the PIX Moving RoboBus, a highly adaptable platform able to integrate and run TREEZE’s technological IP
(Image courtesy of PIX Moving)

The platform also integrates a double wishbone independent suspension design to optimise ride comfort, including on uneven roads and amid the inherent variability of urban environments, a quality further aided by its four 22 in (56 cm) aluminium alloy wheels and the high ground clearance afforded by its 360 mm floor height.

Between the interior and the exterior, the high-voltage electric systems have been subjected to extensive safety validations and road tests, with the related safety systems achieving compliance with the UNECE R100 standard on EV battery systems.

Functional safety hardware & control

As we have seen in previous driverless vehicle features, functional safety is typically the most critical area of technical compliance for AV companies, with ISO 26262 and other region-specific standards dictating much of the control integrity in advanced mobility solutions.

To minimise faults between software and hardware, the Leo P6’s entire drivetrain and control systems – including steering, braking and acceleration – are fully governed through a comprehensive suite of drive-by-wire systems, which includes multi-layered safety redundancies compliant with key standards on functional safety and low-speed autonomous vehicles.

For example, the steer-by-wire system integrates redundant connections to and from the Leo P6’s vehicle control unit (VCU), as well as an electric power-assisted steering system that supports front, four-wheel and wedge steering with ±1° accuracy and response times of no more than 120 ms. TREEZE adds that algorithmic commands engineered to resemble a ‘virtual steering wheel’ enable fine, continuous steering control for nimble traffic negotiation.
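For technically minded readers, the ‘virtual steering wheel’ idea can be pictured as a mapping from a planner’s path curvature to a quantised front-axle steering angle. The sketch below uses a kinematic bicycle model; the wheelbase, steering limit and resolution are assumed values for illustration, not published Leo P6 parameters.

```python
import math

WHEELBASE_M = 2.5        # assumed wheelbase; not a published Leo P6 figure
MAX_STEER_DEG = 25.0     # assumed actuator limit, for illustration only
RESOLUTION_DEG = 1.0     # the +/-1 deg accuracy quoted for the steer-by-wire system

def virtual_steering_wheel(curvature_1pm: float) -> float:
    """Map a planner curvature (1/m) to a front-axle steering angle (deg)
    via a kinematic bicycle model: tan(delta) = wheelbase * curvature."""
    delta_deg = math.degrees(math.atan(WHEELBASE_M * curvature_1pm))
    delta_deg = max(-MAX_STEER_DEG, min(MAX_STEER_DEG, delta_deg))
    # Quantise to the actuator's quoted accuracy
    return round(delta_deg / RESOLUTION_DEG) * RESOLUTION_DEG

print(virtual_steering_wheel(0.05))  # a gentle arc -> ~7 deg of front steering
```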

The PIX RoboBus integrates a double wishbone independent suspension design with four 22 in (56 cm) aluminium alloy wheels and R100 standards-compliant high-voltage safety systems
(Image courtesy of PIX Moving)

The brake-by-wire system, meanwhile, is dual-circuit, with an electro-hydraulic brake and an electronic parking brake available in parallel to stop the vehicle, providing precise deceleration rates of up to 5.0 m/s² with a response delay no longer than 70 ms. Throttle control is likewise programmed with closed-loop torque control via the VCU and motor controller unit (MCU) for high response precision (with maximums of 2% error and 120 ms delay, as TREEZE tells us).
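The closed-loop torque control described here can be sketched, in its simplest form, as a PI loop trimming the torque request against measured feedback from the MCU. The gains and numbers below are illustrative, not TREEZE or PIX values.

```python
class TorquePI:
    """Minimal closed-loop torque controller sketch; gains are illustrative."""
    def __init__(self, kp: float = 0.8, ki: float = 4.0):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, target_nm: float, measured_nm: float, dt_s: float) -> float:
        err = target_nm - measured_nm          # tracking error
        self.integral += err * dt_s            # integral action removes offset
        return self.kp * err + self.ki * self.integral

ctrl = TorquePI()
cmd = ctrl.step(target_nm=60.0, measured_nm=57.0, dt_s=0.01)
# The article's quoted budgets: steady-state error within 2%, latency under 120 ms
```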

Inputs such as PIX’s readily manufactured platform and Autoware’s open-source autonomy software were key to TREEZE’s vision of an affordably priced AV
(Image courtesy of PIX Moving)

Communication between vehicle subsystems is safeguarded through a three-layer CAN architecture, and electrical safety is supported through hard separation and isolation of the 12 V low-voltage network from the high-voltage battery network, together with an integrated insulation resistance tester providing real-time monitoring for short-circuits, over-currents and the like.

Each Leo P6 also has three emergency stop buttons – two onboard and one on a remote controller system at TREEZE’s facilities – to ensure manual termination is always an option of last resort. Through that remote system and similar assets, TREEZE’s personnel can engage in real-time monitoring of vehicle status feedback, system diagnostics and troubleshooting, and remote emergency operations or wireless-manual takeover in emergencies.

Autoware Universe

The autonomy software powering the Leo P6’s intelligence has been built upon the open-source Autoware Universe software stack, developed and maintained by the Autoware Foundation (of which TREEZE and many other organisations are members) and originally developed by TIER IV in Japan. The decision to centre much of its autonomy r&d on this stack linked closely to Chirakkal’s background as an engineer.

“In the previous company I worked for, we had this entire ecosystem that we’d built by utilising what we felt were the best open-source tools, and without wanting to appear ‘braggadocious’, I do feel that I’m one of a small handful of engineers with experience in both of the two different, mainstream open-source autonomy software stacks,” Chirakkal says.

One of these is Apollo Auto, the development of which is led by Baidu; Autoware Universe is the other. In any case, it was through the open-source networks that Chirakkal began communications with – and later moved to – TREEZE, thereafter working to apply his know-how in helping build its driverless mobility ecosystem.

“And the biggest reason for TREEZE wanting to go open-source was that, like many companies, it had a finite amount of funds that it could burn for r&d purposes. So, it couldn’t afford to consume huge resources and labour hours in the concept, theory or simulation phases of r&d,” he recounts.

“It needed to start deploying vehicles and gathering real-world data, understanding, and drawbacks – then, maybe, we could afford to get bogged-down in figuring out where to optimise, or where to enhance, based on our IP. So, we put much of our early development focus into understanding the entire open-source ecosystem of Autoware Universe, to understand what drawbacks could be avoided or what enhancements we could achieve, and that focus is what has led to much of the IP of our company and division.”

Additionally, Chirakkal and his team bore in mind that, while the Leo P6 was slated to be (and remains as of writing) the star of TREEZE’s AV portfolio, they were charged with eventually deploying an ecosystem ranging from 25 t Class 8-type trucks down to light passenger sedans and smaller cars.

Hence, the software stack they were customising based on Autoware had to include some manner of foundation for all the other self-driving vehicles to come. They also came to understand key drawbacks of the base open-source stack. Perception, for one, they found (in both Apollo Auto and Autoware Universe) to be excessively dependent on high-quality Lidar, and thus inherently prone to driving up development costs by pushing engineers to build their AVs around pricey, top-shelf Lidar sensors.

“There’re other specific drawbacks of a similar nature to that example, but the fundamental conclusion is that Autoware cannot simply be deployed ‘as-is’ for all these different kinds of vehicles; engineers need to understand the architecture of each vehicle and to develop their own customised ‘modules’ that can fit into the actual platform of technologies inside of the vehicle,” Chirakkal says.

“Hence, much of the software team’s specific engineering has revolved around taking the model from Autoware, which is on-par with YOLO [the You Only Look Once algorithm] in its quality and development history, and retraining and optimising it for our purposes.

“Thanks to what we’ve built and the maturity of our software IP, we’re in a position to make any vehicle – following an appropriate componentry retrofit – run autonomously in a matter of weeks. And we have the expertise in the componentry necessary for picking the right perception, planning, localisation, mapping and control hardware for each use- and cost-case.”

In the case of the Robobus, a key focal point for TREEZE in the pursuit of a reasonably priced AV was avoiding reliance on high-end Lidars. Thus, the software and sensor architectures have been built around a combination of (in the company’s view) mid-range Lidars with cameras – complementing one another with the former’s 3D data and the latter’s colour information – together with inertial and satellite inputs, to generate the best possible data and decisions from the most cost-efficient hardware it could identify.

“That whole pipeline encompasses many decision-making and control technologies, down to our fail-safe behavioural modules, the quality of which are key to getting licensed and certified to operate AVs in South Korea,” Chirakkal notes.

“But underlining all of them has been our wish to move away from a modular approach – as modular as the vehicle’s mechanics and hardware have to be for the time being – and to gradually move towards an end-to-end approach; the use of data-centric models to empower autonomous navigation and intelligent communications. We’re building those models using the data we’re gathering via this first generation of modular AVs, and distilling their intelligence into these models will be key to training the next generation.”

Sensing architecture

Integrated approximately at the four corners of the Leo P6’s roof are four Lidars, each a 32-channel Ouster unit (their performance-to-price ratio being a primary selling point for TREEZE).

“Many parameters like the number of returns, pulses per second or points generated per second were important to us, but the biggest performance factor in our considerations was vertical FoV [field of view],” Chirakkal explains.

“With horizontal FoV, it’s a given that a mechanically spinning Lidar will give you 360° of coverage, but the vertical FoV governs how much you have to tilt the Lidar either as you’re integrating it, or maybe even tilting in real time – as in electromechanically – when you’re out in the world and need a wider FoV to see something really close to you on the ground.”
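That point about tilt and vertical FoV reduces to simple trigonometry: the lowest beam of a roof-mounted Lidar leaves at the tilt angle plus half the vertical FoV below horizontal, so tilting the unit pulls its nearest ground hit closer. The mounting numbers below are purely illustrative, not the Leo P6’s geometry.

```python
import math

def nearest_ground_hit(mount_height_m: float, tilt_deg: float, half_vfov_deg: float) -> float:
    """Distance at which a roof-mounted Lidar's lowest beam first strikes the ground."""
    down_deg = tilt_deg + half_vfov_deg      # lowest beam angle below horizontal
    return mount_height_m / math.tan(math.radians(down_deg))

print(nearest_ground_hit(2.0, 0.0, 22.5))    # flat mount    -> ~4.8 m
print(nearest_ground_hit(2.0, 10.0, 22.5))   # tilted 10 deg -> ~3.1 m
```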

The number of channels, which governs vertical resolution, was also a high priority for TREEZE, particularly regarding the angle of deviation between one channel and the next. The fewer the channels, and the wider the angular gaps between them, the greater the chance that some small roadside object of critical importance (potentially a child or a wheelchair user) might go undetected.

“That said, for future projects, we can customise the sensor stack and the minimum performance requirements of each sensor based on each vehicle and its operational design domain – those latter two factors together defining the maximum speed and manoeuvrability of the vehicle, which the sensors have to be chosen to account for – as we did for the Leo P6,” Chirakkal adds.

Also placed around the vehicle’s exterior are five cameras sourced from Sensing-World in China.

“The cameras are installed for various purposes that the Lidars aren’t necessarily as well-suited to; the main ones are blind spot detection, traffic light detection and forward collision warnings,” Chirakkal says.

Five cameras positioned about the vehicle detect blind spots and traffic lights, and also provide forward collision warnings
(Image courtesy of TREEZE)

An INS enabling real-time satellite positioning and inertial measurement inputs is also integrated. As part of its dedication to the Autoware open-source ecosystem, TREEZE has sought to integrate and run on multiple different INSs, engineering compatibility between its software stack and inertial solutions from Hexagon, SBG Systems and CHCNAV.

“Those are the main suppliers we’ve chosen, and we had distinct internally agreed standards on GNSS receiver accuracy and PPS, as well as many specifications pertaining to the quality of their antennas, including persistent connectivity with multiple constellations,” Chirakkal says.

“But the main thing especially was the quality of data fusion between the GNSS, the IMU and the RTK-processing, the latter of which we could not do without. The accuracy of the fused data in terms of its standard deviation is something we assess every INS for before we’re willing to deploy it.”
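By way of illustration, the kind of acceptance check Chirakkal describes might look like the sketch below, comparing an INS’s fused positions against a surveyed ground-truth trajectory and reporting the spread of the error; all numbers are illustrative, and the actual thresholds are TREEZE’s internal standards.

```python
import numpy as np

# Fused RTK/GNSS/IMU positions versus ground truth, in local ENU metres (illustrative)
fused_enu = np.array([[0.01, -0.02], [0.03, 0.01], [-0.02, 0.02]])
truth_enu = np.zeros_like(fused_enu)

errors = np.linalg.norm(fused_enu - truth_enu, axis=1)
print(f"mean error {errors.mean():.3f} m, std dev {errors.std():.3f} m")
# A unit would only be deployed if the spread stays within the internal threshold
```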

The inputs from these sensors feed into the Leo P6’s main, centralised VCU, which is responsible for all data processing, from perception through to the delivery of control outputs to the drivetrain. TREEZE hopes one day to engineer in-house a computer fully optimised for all its algorithms and software modules. For now, it uses a Neousys Nuvo-10208GC industrial PC, running an Intel CPU and two GPUs, with 64 GB of DDR5 (4800 MT/s) SDRAM and a plethora of serial (including USB), Ethernet and PCI Express ports.

Branches of perception

The RGB camera data and 3D point cloud Lidar data are combined via a late fusion strategy, meaning both streams are individually processed into their own perception outputs, and those two perception states are then linked using extrinsic calibration parameters and state estimation processes, including Kalman and Bayesian filters.
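The essence of such a late-fusion step can be sketched as combining two Gaussian estimates of the same tracked object, which is the operation underlying a Kalman update. The covariances below are illustrative, chosen so that the tighter Lidar estimate dominates.

```python
import numpy as np

def fuse_gaussian(mu_a, cov_a, mu_b, cov_b):
    """Combine a Lidar-derived and a camera-derived position estimate of the
    same object as two Gaussians (the product rule behind a Kalman update)."""
    k = cov_a @ np.linalg.inv(cov_a + cov_b)      # Kalman-style gain
    mu = mu_a + k @ (mu_b - mu_a)
    cov = (np.eye(len(mu_a)) - k) @ cov_a
    return mu, cov

lidar_mu, lidar_cov = np.array([12.3, 4.1]), np.diag([0.05, 0.05])  # metres
cam_mu, cam_cov = np.array([12.6, 4.0]), np.diag([0.40, 0.40])
mu, cov = fuse_gaussian(lidar_mu, lidar_cov, cam_mu, cam_cov)
# The fused estimate sits close to the Lidar's, whose covariance is far smaller
```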

“Prior to the actual fusion of Lidar and camera data together, the Lidar data from the four Ouster sensors get combined into a single point cloud, and that then goes through two different branching modules: one is a DNN [deep neural network] module, which is a centre-point-based architecture that gives us 3D detections with classifications of objects like pedestrians, cyclists and vehicles,” Chirakkal explains.

Lidar is a central data source in both perception and localisation for the Leo P6, and choosing affordable Lidars was vital to keeping the AV’s price low
(Image courtesy of TREEZE)

“But the problem with neural-network-derived classifications is that, invariably, amidst what are obviously pedestrians or cyclists, there’s going to be some kind of unknown object that it can’t classify, and you can’t train the module to figure out how to classify those edge-case objects; you’d just overload your computing resources.”

To account for those situations, TREEZE also applies a clustering module to the Lidar data. In essence, as the Leo P6 sees objects within its driveable vicinity, it clusters the Lidar point clouds about each of them, including unknown objects in the driving region (such as a wild animal, a dropped pallet of white goods or a misplaced block of portable restrooms). Such clusters, along with those of objects that the DNN module successfully classifies in the meantime, are flagged as things to be noted by the computer and manoeuvred around by the vehicle.

“So, we get two different data streams corresponding to our Lidar perception. Then, these two are again merged back together, meaning point clusters matching with the DNN module’s known classes essentially get ignored because they’re accounted for, and clusters that don’t have an applicable class get marked as ‘unknown’, giving us our final Lidar perception model,” Chirakkal explains.
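That two-branch merge can be sketched as below: cluster centroids that sit close to a DNN detection are treated as already accounted for, while the leftovers are kept as ‘unknown’ obstacles. The matching radius and class labels are illustrative, not TREEZE’s actual logic.

```python
import numpy as np

def merge_lidar_branches(dnn_dets, clusters, match_radius_m=1.0):
    """dnn_dets: list of ((x, y) centroid, class); clusters: list of (x, y) centroids."""
    merged = list(dnn_dets)                                   # keep classified objects
    det_xy = np.array([c for c, _ in dnn_dets]) if dnn_dets else np.empty((0, 2))
    for centroid in clusters:
        if det_xy.size == 0 or np.linalg.norm(det_xy - centroid, axis=1).min() > match_radius_m:
            merged.append((centroid, "unknown"))              # unclassified, still avoided
    return merged

dets = [(np.array([10.0, 2.0]), "pedestrian")]
clus = [np.array([10.2, 2.1]), np.array([25.0, -3.0])]        # second has no class
print(merge_lidar_branches(dets, clus))                       # -> pedestrian + one unknown
```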

Meanwhile, the 2D images from the five cameras installed about the Leo P6 are stitched together, giving not only a 360° colour view around the vehicle but also stereoscopic depth wherever two or more camera FoVs overlap, thereby providing a measure of three-dimensionality for objects around the vehicle relative to the cameras’ known positions and angles.

“At this point, we have 2D camera perception of obstacles and the Lidar perception of obstacles. The next stage is our late fusion technique, which combines both to achieve a very precise, actionable 3D model of the vehicle’s surroundings leveraging the point and colour data simultaneously – to the point that the Leo P6 and our other vehicles can use it for localisation,” Chirakkal continues.

Localisation, evolved

“The fused perception model is fed downstream into our planning module, and we think of that one as the brain of our autonomy because it takes inputs from everything to effectively understand – but the next module that bears explaining is the localisation part of our system, since naturally, understanding precisely where you are in the world, relative to where you want to go, is the other crucial input needed for both planning and control,” Chirakkal says.

“We primarily use Lidar combined with INS data for our localisation, within the usage of a map that is created prior to the Leo P6 engaging in its shuttle service routes in a given location – the Leo P6 matches its real-time GNSS-inertial data, supplemented with Lidar for added certainty, to georeferenced point cloud data in its onboard map data.

“I referred earlier to first-generation autonomous road vehicles and a second generation on its way; what defines that for us is whether the AVs are localising in the conventional way I just described, which we call ‘Autonomy 1.0’, or achieving a map-less localisation, free from the vulnerabilities and limitations of GNSS systems, which we call ‘Autonomy 2.0’.”

In TREEZE’s view, Autonomy 1.0 suffers from significant weaknesses beyond the well-established propensity of GNSS data to be slowed, corrupted or lost to issues such as multipathing, jamming, spoofing or ionospheric interference. A primary example is its dependence on the constant maintenance of commonly used map databases, such as HD maps, which is costly and time-consuming, and hamstrings the ability of AV companies to scale their vehicles and operations.

“We’d like to resolve that – and plan to – but we’re initially focusing on deployment and on building our end-to-end architectures in order to more easily engineer our wider AV ecosystem,” Chirakkal says.

With localisation data, perception data and key traffic details, the vehicle’s computer can compute the geometries and velocities for its ideal next trajectories
(Image courtesy of TREEZE)

For now, TREEZE’s policy upon deploying into a new region is to create a map corresponding to that region. This is done by driving around to gather Lidar and INS data, and fusing the two together to achieve odometry models. With odometry matching every timestamp from every driven route, a point cloud ideal for localisation is created.

“The majority of self-driving vehicle manufacturers request a different mobile mapping system from their actual vehicles to create their map, but we’re able to create the Leo P6’s maps using the Leo P6, with data acquired through manual driving with a handheld remote, which has significantly reduced the resource costs of our map generation,” Chirakkal explains.

The 3D point cloud map does not, however, contain any information on the whereabouts of crosswalks, lane markers, double yellow lines or other safety-critical road details. Hence, the next step is to manually annotate the map. This remains a time-consuming and labour-intensive process, and a major motivator for TREEZE to bypass it entirely through Autonomy 2.0.

“Within the process of annotation, you actually draw the lanes, you put in the crosswalks, you annotate where the traffic lights and stop signs are, along with attributes for the lanes and the other road details, and you link the lanes, you create the junctions and so on,” Chirakkal says.

“When this is done, what you end up with is something called a semantic map. This is the backbone of the control and route planning part because without it, the planning algorithm can’t safely plan the vehicle’s next manoeuvre. So, in localisation, the vehicle computer is comparing the real-time Lidar and INS 3D point map to the pre-generated semantic map in order to matchmake and localise itself within the region.”
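The geometric core of that matchmaking, registering the live scan onto the pre-built point cloud, can be sketched with the single SVD (Kabsch) alignment step that ICP-style matchers iterate. Correspondences are assumed already found here, and this is an illustration of the technique rather than TREEZE’s implementation.

```python
import numpy as np

def align_scan_to_map(scan_xy, map_xy):
    """Solve for the rigid transform (r, t) that best registers corresponding
    scan points onto map points: the Kabsch/SVD step inside ICP-style matching."""
    p, q = scan_xy.mean(0), map_xy.mean(0)
    h = (scan_xy - p).T @ (map_xy - q)            # cross-covariance of centred sets
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                      # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, q - r @ p                           # rotation and translation correction

scan = np.array([[0, 0], [1, 0], [0, 1]], float)
theta = np.deg2rad(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
r, t = align_scan_to_map(scan, scan @ rot.T + np.array([0.3, -0.1]))
print(np.rad2deg(np.arctan2(r[1, 0], r[0, 0])), t)  # recovers ~5 deg and the offset
```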

In situations where the GNSS suffers a fault or failure, the Lidar odometry continues to operate and localise the vehicle with a high degree of confidence; conversely, in sparse areas with limited physical landmarks for the Lidar to pick up, the GNSS keeps working.

“And right now, our r&d is in a position where we’re about to add in the camera perception layer to our localisation approach,” Chirakkal says. “That would integrate the means to detect and classify lanes, roads, crosswalks and so on in real time. It would be an instanced segmentation that the vehicle would be carrying out and we’re actively training our model to do that at the moment.”

At present, TREEZE’s instanced segmentation work is particularly focused on recognising lane markings, including dotted, dashed, solid and other marking styles, as well as differing colours, including white, blue and (both single and double) yellow lines.

“You cannot recognise and classify these with a conventional, simple segmentation process where you’re segmenting the pixels in an image; you need instanced segmentation to first understand which is the main lane to adhere to, and then determine whether that can be crossed, or deviated from, thereafter accounting for what influence any additional lanes in the vicinity might also have on the move the vehicle is considering,” Chirakkal continues.
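A toy sketch of the downstream reasoning that instanced segmentation enables is shown below: each lane-marking instance carries a type, the vehicle picks the nearest marking on each side, and a lookup decides whether it may legally be crossed. The labels and crossability table are illustrative, not TREEZE’s trained classes.

```python
CROSSABLE = {"dashed_white": True, "solid_white": False, "double_yellow": False}

def ego_lane_decision(lane_instances):
    """lane_instances: list of (lateral_offset_m, marking_type); negative = left."""
    left = min((l for l in lane_instances if l[0] < 0), key=lambda l: abs(l[0]), default=None)
    right = min((l for l in lane_instances if l[0] >= 0), key=lambda l: abs(l[0]), default=None)
    return {side: (inst[1], CROSSABLE.get(inst[1], False))
            for side, inst in (("left", left), ("right", right)) if inst}

print(ego_lane_decision([(-1.7, "dashed_white"), (1.8, "double_yellow"), (-5.2, "solid_white")]))
# -> nearest left marking may be crossed; the double yellow on the right may not
```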

Planning and control

Based on the localisation data, key traffic details in the semantic map (or, in future, from the camera data) and the fused perception data tracking obstacles in the surroundings, the planning module determines the ideal subsequent manoeuvres for the Leo P6.

The exact horizon over which the vehicle computer calculates and recalculates its trajectory can vary; it might, for example, plan the next 10 or 50 seconds’ worth of manoeuvres at a time.

The control module generates steering, gear, brake and throttle values, as well as turn and hazard signals that match the manoeuvres to be taken
(Image courtesy of PIX Moving)

“The actual trajectory consists broadly of two parts. One is what we call the geometric path, which accounts for whether the vehicle needs to shift or merge lanes, or needs to make a left turn, a right turn, a U-turn, continue straight ahead or so on,” Chirakkal explains.

The Leo P6’s wheels are driven by two centrally mounted permanent magnet synchronous motors, powered from a 32 kWh lithium iron phosphate battery pack
(Image courtesy of TREEZE)

“The other part is the velocity profile. Given the geometric path, the vehicle needs to output the optimal speed at each timestamp. So, both the geometric path and velocity profile are calculated in the planning module.”
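In data-structure terms, such a planner output can be pictured as a list of timestamped samples pairing the geometric path with the velocity profile. The sketch below is illustrative; the field names are ours, not Autoware’s or TREEZE’s.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrajectoryPoint:
    """One planner sample: the geometric path plus the velocity profile."""
    x_m: float
    y_m: float
    heading_rad: float
    velocity_mps: float
    time_s: float

def clamp_velocity_profile(points: List[TrajectoryPoint], v_max_mps: float = 8.33) -> None:
    """Cap the profile at the Leo P6's 30 kph (~8.33 m/s) ceiling."""
    for p in points:
        p.velocity_mps = min(p.velocity_mps, v_max_mps)
```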

Next, the planning algorithm’s trajectory is fed into the control module, which contains the logic necessary for generating not only steering commands, throttle commands and braking commands but even gear commands, with that latter category being key for manoeuvres requiring reversals such as parking.

The gear commands calculated and output by the control module encompass the gamut of gear-changing approaches seen across the world’s different transmission systems. In the Leo P6, for instance, as with most robobus- or robotaxi-type vehicles we have featured, there is no tangible mechanical gearbox in the drivetrain; hence, the ‘gearbox’ or gear commands in such vehicles are simply a set of CAN commands that provide the electric powertrain’s inverter system with ideal voltage, current and commutation commands to account for torque and speed variations, as well as reversing if needed. New EV technologies analogous to gear switching, such as ePropelled’s dynamic torque switching, may also be accounted for this way.

“But we also want to be producing retrofitted, older vehicles as AVs in future; and you can’t just take off the gearbox with such vehicles,” Chirakkal says. “We need to be able to run manual, automated manual and automatic transmission vehicles as autonomous, not just for the sake of market coverage but because regulations don’t sanction vehicles where you’ve removed the physical gearbox.”

Fortunately, there is a range of solutions and workarounds companies can adopt to ensure that their control algorithm can still command such drivetrains. One may, for instance, reverse-engineer the CAN signals and then issue precise, translated CAN commands, in the case of automatic or automated manual systems. Alternatively, AV companies can provide their own shift-by-wire module, based on their own known CAN library or database, to electromechanically control manual (and automated manual, if necessary) systems.

“The latter is the approach we’re focused on taking, particularly because reversing CAN signals is often borderline or fully illegal – many do it, even though they really shouldn’t – so we actually have a shift-by-wire solution that we can integrate onto vehicles’ gearboxes to control them with our software,” Chirakkal explains.

In addition to calculating steering, gear, brake and throttle values, the control module must also generate turn and hazard signals that exactly match the manoeuvres about to be taken. Once the control algorithm is finished generating all these commands, the computer must communicate those target parameters to the vehicle’s drivetrain and ancillaries.

“That means we needed to create a software package that could translate the commands generated by the control module into CAN commands so that they could be communicated securely over the CAN bus to the by-wire systems – we refer to this as our vehicle interfacing system. It’s quite straightforward but it forms the final core part of our autonomy stack,” Chirakkal says.
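A sketch of what such a vehicle interfacing layer can look like is given below, using the open-source python-can package. The CAN IDs, scaling factors and frame layouts are hypothetical stand-ins; the Leo P6’s actual CAN database is proprietary.

```python
import struct
import can  # python-can, standing in for the real interfacing layer

STEER_ID, DRIVE_ID = 0x110, 0x111   # hypothetical arbitration IDs

def to_can_frames(steer_deg: float, throttle_pct: float, brake_pct: float, gear: int):
    """Translate control-module outputs into (hypothetically laid-out) CAN frames."""
    steer_raw = int(steer_deg * 10)  # assumed 0.1 deg/bit signed scaling
    return [
        can.Message(arbitration_id=STEER_ID,
                    data=struct.pack(">hxxxxxx", steer_raw),
                    is_extended_id=False),
        can.Message(arbitration_id=DRIVE_ID,
                    data=struct.pack(">BBBxxxxx", int(throttle_pct), int(brake_pct), gear),
                    is_extended_id=False),
    ]

# bus = can.interface.Bus(channel="can0", bustype="socketcan")
# for frame in to_can_frames(5.0, 20.0, 0.0, gear=1):
#     bus.send(frame)
```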

Electric powertrain

In addition to commanding the aforementioned by-wire control systems, the VCU and control algorithm send commands to the Leo P6’s electric drivetrain. The vehicle is powered by two permanent magnet synchronous motors (PMSMs) with a combined peak power of 30 kW and torque of 240 Nm, drawing energy from a 31.94 kWh lithium iron phosphate (LiFePO4) battery that provides the range of up to 140 km (depending on conditions).

Each of the two motors is rated for 5 kW of continuous power output and 15 kW of peak power, as well as 23.9 Nm of continuous torque up to a 120 Nm maximum. They are also designed for a rated voltage of 307 V and IP67 protection against water and dust. Each connects to the main axle via a single-speed gearbox with an 8.33:1 reduction ratio.

Each motor is additionally controlled by its own dedicated inverter, which forms part of the vehicle’s integrated high-voltage system (along with a power distribution unit, an onboard charger, the aforementioned MCU and a DC-DC converter that bridges the high- and low-voltage networks).

These components are centrally mounted and thus protected within the vehicle frame. Overall, while the standard PIX RoboBus comes with four in-wheel motors, “This dual-motor, mid-mounted configuration enhances energy efficiency, reduces mechanical complexity, and offers greater durability and noise isolation compared with in-wheel systems,” Chirakkal says.

The LiFePO4 cathode chemistry was chosen for its high thermal stability, long cycle life and low fire risk, all being desirable in autonomous urban mobility applications. The system also supports overcharge and overdischarge protection, and the pack’s thermal management architecture incorporates a winter heating function alongside its cooling system to maintain performance in cold climates.

The pack is altogether rated for a 104 Ah capacity and a 307.2 V supply output (as one may have assumed from the motors’ voltage). It is centrally mounted under the chassis floor to optimise for weight distribution and safety. Its housing also features added metallic protection at its bottom, as well as being IP67-rated, much like the two PMSMs.
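For readers inclined to cross-check the published figures, the arithmetic holds together; all inputs below are the article’s own numbers.

```python
print(2 * 15, "kW combined peak power")            # -> 30 kW, as quoted
print(2 * 120, "Nm combined peak motor torque")    # -> 240 Nm, as quoted

pack_ah, pack_v = 104, 307.2
print(pack_ah * pack_v / 1000, "kWh pack energy")  # -> ~31.95 kWh vs the 31.94 kWh rating

reduction = 8.33
print(2 * 120 * reduction, "Nm peak axle torque")  # -> ~2000 Nm before losses

print(1000 * 31.94 / 140, "Wh/km at maximum range")  # -> ~228 Wh/km implied consumption
```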

An automotive BMS performs EV-standard health and performance duties, and communicates to the VCU via CAN for real-time monitoring and diagnostics.

V2X connectivity

The original PIX RoboBus comes with an integrated router and radio communication system to support remote drive-by-wire and monitoring of health and telemetry. Said router, installed in the vehicle’s rear compartment, supports 4G/5G IoT SIM cards for cloud connectivity, and an additional remote controller and receiver functioning over the 433 MHz ISM band is included for drive-by-wire control.

Additionally, although not a core part of the original Autoware stack, TREEZE has sought to integrate V2X capability in the Leo P6 Robobus (a logical course of action, given the South Korean government’s emphatic support for V2X), and develop foundations for doing so in its other AVs.

For those unfamiliar, V2X refers to wireless comms between a vehicle and any entity that may interact with it, particularly for the sake of automotive safety, traffic improvements, energy efficiency and pollution reduction. It typically encompasses other well-known mobility connectivity technologies such as vehicle-to-grid and vehicle-to-device, as well as vehicle-to-vehicle and similar vehicle-to-network types of communications.

“To that end, we have developed our own V2X adapter module such that any V2X OBUs [onboard units] that may be developed or released in future can be connected to it, and henceforth integrated into our software stack with minimal configuration steps; doing so was really important for ensuring our software stays scalable and robust,” Chirakkal says.

South Korea’s standards on V2X communication protocols, security and radio frequency spectrum policy are developed and defined by a combination of government ministries and key agencies, but the regulation pertaining to mandatory V2X for autonomous systems may still potentially change.

Therefore, to minimise its odds of eventually having to re-engineer its adapter module, TREEZE has utilised SAE International’s J2735:202409 standard, which specifies how a message set for V2X communications, along with its data frames and data elements, should be formed.

“South Korea has essentially taken the SAE protocol for V2X and slightly modified it to suit certain elements of South Korean roads and traffic; we’re working closely with ITSK, the main institution responsible for those modifications, including an r&d project deploying the newest version of the SAE standard,” Chirakkal says.

Alongside continued use of the PIX RoboBus, TREEZE is ordering many other AV units to explore permutations of end-to-end autonomy in varying conditions and use-cases
(Image courtesy of PIX Moving)

“We’re also collaborating with multiple V2X OBU vendors and road-side unit vendors, with monitoring centres collating histories and data on all these messages for analysis. And so, to facilitate these partnerships, we’ve written our own encoding/decoding algorithm for SAE J2735:202409 so that any compliant V2X hardware can communicate with our autonomy stack, and any company wanting to integrate V2X into their own platform is welcome to buy that algorithm from us as a product customised to their specifications.”
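By way of illustration, an encode/decode round trip of the kind described might be built on the open-source asn1tools package, which supports the UPER codec that J2735 mandates. The schema filename below is a placeholder for the ASN.1 definitions licensed from SAE, and a real BasicSafetyMessage requires the full set of mandatory coreData fields.

```python
import asn1tools

# 'J2735_202409.asn' stands in for the SAE-licensed ASN.1 schema (not reproduced here)
j2735 = asn1tools.compile_files("J2735_202409.asn", codec="uper")

bsm = {
    "coreData": {
        "msgCnt": 1,
        "lat": 375665810,      # 1/10 microdegree units, per J2735 convention
        "long": 1269779690,
        # ...plus the remaining mandatory coreData fields from the schema
    }
}

payload = j2735.encode("BasicSafetyMessage", bsm)      # UPER bytes for the V2X OBU
decoded = j2735.decode("BasicSafetyMessage", payload)  # and back again for monitoring
```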

Fleet of the future

In addition to its ongoing work on V2X and its dream of map-less autonomy, TREEZE is also working on fleet management systems, not only in terms of software but also in ways to achieve interoperability between AVs of wildly different stripes.

To that end, it has ordered numerous different AVs from the US and elsewhere, partially in order to learn how different vehicles may be integrated together within unified fleets, and thus made able to function reliably and without the need for intensive oversight across varying industries.

“But more than that, everything we do is dedicated to learning, so that we can design within ourselves a great ‘teacher’, not only for teaching the future generation of Autonomy 2.0 vehicles, but also for expanding our range of use-cases,” Chirakkal says.

“From design and engineering through to manufacturing, it really pays to be educated, knowledgeable and well-versed in the diversity of conditions and use-cases that AVs can run into. So, we’ll be deploying lots of different vehicles at once in the months and years ahead, managing their operations and maintenance, and observing what it takes to make that easiest for the end-user; it’s going to be a fascinating learning experience, we’re all certain.”

Key specifications

TREEZE Astreze Leo P6 Robobus

Autonomous shuttle

SAE Level 4 autonomy

Fully electric

Dual central drive motors

Dimensions: 3810 x 2220 x 1960 mm

Cabin height: 1780 mm

Floor height: 360 mm

Seat height: 430 mm

Onboard energy: 31.94 kWh

Charging time: 1.5–5 hours

Maximum speed: 30 kph

Maximum range: 140 km between charges

Turning radius: 4.8 m

Maximum passenger complement: 6

Some key suppliers

Vehicle platform: PIX Moving

Autonomy software: Autoware Universe

Simulations: dSPACE

Simulations: CARLA

Lidars: Ouster (and Seyond and Hesai for future versions)

Cameras: Sensing-World

Stereo cameras: Foresight Automotive

INS: NovAtel

INS: SBG Systems

INS: CHCNAV

PC: Neousys
