Traction inverter design for EVs has evolved due to several technological advances in areas such as semiconductors, cooling techniques, and switches.
In this three-part series, Exro’s Chief Technology Officer, Eric Hustedt, helps us explore what a traction inverter is, how inverters work, how EV traction inverters have developed, and the latest advancements in traction inverter design. This third part of the article focuses on the recent advancements in electric vehicle inverter design. Specifically, we explore switches in traction inverters, semiconductor advancements, cooling methods, and other developments that have contributed to the evolution of traction inverter design.
Welcome to part three of our series on inverter technology. In part one, we provided an introduction to inverters and how they work, and in part two, we explored the early advancements in inverter technology and the differences between AC and DC motors. Now, in part three, we will dive deeper into the latest advancements in inverter technology and take a closer look at critical components such as switches, semiconductor advancements, cooling methods, and interconnects.
Since its invention, the fundamental concept behind a three-phase inverter has not changed; however, there have been major advancements in the devices, fabrication techniques, and components used. These advancements have enabled the production of smaller, more affordable, and more powerful inverters. In the following section, we will delve into each of these crucial developments in greater detail.
The switches in an inverter play a crucial role in regulating the flow of electrical energy and converting DC to AC power. They are responsible for switching the current ON and OFF at a rapid rate to create the desired AC waveform. The type, construction, and cooling of the switching elements are arguably the most significant elements of an inverter design.
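Rapid ON/OFF switching is typically orchestrated with sinusoidal pulse-width modulation (PWM). As a rough illustration of the idea (a generic sine-triangle comparison, not any particular inverter's implementation; all values are made up):

```python
import math

def spwm_gate_states(f_ref=50.0, f_carrier=5000.0, amplitude=0.8, n_samples=1000):
    """Return ON/OFF states for one high-side switch over one reference
    cycle, by comparing a sine reference against a triangular carrier."""
    states = []
    for k in range(n_samples):
        t = k / n_samples / f_ref                      # time within one cycle
        ref = amplitude * math.sin(2 * math.pi * f_ref * t)
        phase = (t * f_carrier) % 1.0                  # triangular carrier in [-1, 1]
        carrier = 4 * abs(phase - 0.5) - 1.0
        states.append(1 if ref > carrier else 0)
    return states

states = spwm_gate_states()
duty = sum(states) / len(states)
print(f"average duty over one cycle: {duty:.2f}")
```

Filtered by the motor's inductance, this stream of ON/OFF pulses averages out to the desired sinusoidal current.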
The switches used in modern inverters must be able to handle high currents, sometimes 500 amps per phase or more. They must also rapidly switch this current on and off at voltages ranging from 400V to 800V DC. This is no small task and requires switches that can handle this level of power without generating excessive heat or voltage overshoot.
Effective thermal management is crucial for the proper functioning of the switches. Overheating can cause the switches to degrade over time or fail entirely, rendering the system inoperable. Because the switches are connected to high-voltage sources, the cooling system must also act as an electrical insulator. This conflicting requirement of good heat transfer and electrical isolation is an engineering challenge: the best materials we have for conducting heat are metals such as copper and aluminum, but they are also very good conductors of electricity.
To put the power requirements of an inverter into perspective, it is helpful to compare them to an average household outlet. Household outlets typically operate at only 15 amps and 120V, or 10 amps and 240V, depending on the region. In contrast, inverters used in electric vehicles and other high-power applications must often handle currents and voltages one to two orders of magnitude larger. This highlights the complexity and sophistication required in designing the switches for these applications.
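A quick back-of-envelope comparison makes the gap concrete (the 500 A at 800 V operating point is a rough illustrative figure from the discussion above, not a specific product rating):

```python
# Rough power comparison: household outlet vs a high-power EV traction inverter.
outlet_w = 15 * 120        # 15 A at 120 V -> 1.8 kW
inverter_w = 500 * 800     # ~500 A at 800 V DC -> 400 kW scale
ratio = inverter_w / outlet_w
print(f"outlet: {outlet_w/1e3:.1f} kW, inverter scale: {inverter_w/1e3:.0f} kW, "
      f"ratio: ~{ratio:.0f}x")
```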
Since the 1980s, MOSFETs have been the preferred device for lower voltage inverters, while IGBTs have been the go-to choice for higher voltages of around 150V or higher. IGBTs remained the top choice in the high voltage market until the mid-to-late 2010s, when wide band gap semiconductors like silicon carbide (SiC) MOSFETs became commercially viable.
IGBTs are effective switches, but they have certain limitations. While they can switch quickly, they still lag behind devices such as MOSFETs in terms of switching speed. The issue with slow switching is that when transitioning from 'ON' to 'OFF' or vice versa, the device is exposed to voltage while still conducting current, generating large power loss in the switch: in other words, it is partially 'ON', which as noted earlier is undesirable. The longer it takes to switch between 'ON' and 'OFF', the more heat is generated during each transition, thus limiting the switching frequency to avoid overheating the device.
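The heat generated per transition can be estimated with a common first-order overlap approximation, where the device sees roughly half the voltage-current product for the duration of each transition. The device parameters below are illustrative, not datasheet values:

```python
def switching_loss_w(v_dc, i_load, t_on_s, t_off_s, f_sw_hz):
    """First-order switching-loss estimate: triangular overlap of voltage
    and current during each transition, times the switching frequency."""
    e_per_cycle = 0.5 * v_dc * i_load * (t_on_s + t_off_s)
    return e_per_cycle * f_sw_hz

# Same operating point, two switching speeds: a slower IGBT-like device
# vs a faster SiC-MOSFET-like device (made-up transition times).
slow = switching_loss_w(800, 300, 200e-9, 400e-9, 10e3)
fast = switching_loss_w(800, 300, 20e-9, 40e-9, 10e3)
print(f"slow device: {slow:.0f} W, fast device: {fast:.0f} W")
```

A tenfold reduction in transition time cuts switching loss tenfold at the same frequency, which is why faster devices can switch at higher frequencies for the same heat budget.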
Wide band gap semiconductors are materials that require more energy to be applied to them to transform them from insulators to conductors compared to conventional semiconductors such as silicon. This reduces sensitivity to external energy, allowing them to operate at higher voltages, frequencies, and temperatures.
As of 2023, two wide band gap semiconductors are commercially available for power devices: silicon carbide (SiC) and gallium nitride (GaN). At present, SiC leads in terms of cost per ON-resistance and is available with higher voltage capability; as such, it has become the dominant choice for inverter power semiconductors.
Power semiconductor switches, regardless of type, generate heat when operating, and how well they can be cooled determines how much silicon area is required for a given application. Silicon area is directly proportional to the cost of the switch, therefore improving the cooling methods has a direct cost benefit.
The heat generated in the switches is a result of two main factors: conduction loss and the already discussed switching loss. Conduction loss is the heat generated by the movement of electric current. An example of this can be seen when a coiled extension cord is connected to a high-powered device, such as a space heater, causing the cord to become warm. With the exception of superconductors, conduction loss occurs in all materials through which current flows, including power semiconductors, bus bars, and power delivery cables.
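Conduction loss follows the familiar I²R relationship. A minimal sketch, using illustrative (not measured) resistances for both the switch and the extension-cord example above:

```python
def conduction_loss_w(i_amps, r_ohms):
    """Conduction loss P = I^2 * R, for any current-carrying element:
    switch, bus bar, or cable."""
    return i_amps ** 2 * r_ohms

# Illustrative figures, not datasheet values: a switch with 2 milliohms of
# on-resistance carrying 300 A dissipates significant heat...
switch_loss = conduction_loss_w(300, 0.002)   # W
# ...while a coiled 0.1-ohm extension cord feeding a ~1.5 kW space heater
# (about 12.5 A at 120 V) warms noticeably too.
cord_loss = conduction_loss_w(12.5, 0.1)      # W
print(f"switch: {switch_loss:.0f} W, cord: {cord_loss:.1f} W")
```

Because loss scales with the square of current, doubling the current through the same element quadruples its heat.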
As power semiconductors become more compact and high-powered, the challenge lies in effectively removing heat from these smaller devices. Additionally, the cooling solution must provide electrical isolation, as the chips within them operate at hundreds of volts and are "live", while the cooling system is typically made of metal and connected to the chassis/ground.
Finally, reliability is also a significant challenge. The frequent and large temperature changes in these small and complex mechanical structures pose a significant material science challenge. The utilization of ceramics in electronics has a rich history, as they offer relatively high thermal conductivity compared to other insulators and have very good electrical insulation properties. Direct Bond Copper (DBC), a commonly used ceramic substrate in power modules, is created by bonding thin copper sheets to one or both sides of a ceramic substrate through a high-temperature oxidation process. Afterward, these DBCs are chemically etched to produce the necessary electrical connections, similar to the process used to produce printed circuit boards.
Various ceramics can be utilized, each providing different levels of performance and cost. Alumina (aluminum oxide) is frequently used and is the most economical. Other ceramic materials, such as aluminum nitride (AlN), silicon nitride (Si3N4), or zirconia-doped alumina, offer improved thermal performance, enhanced mechanical strength, or both, with a corresponding increase in cost.
An alternative method of substrate construction is called Active Metal Brazing (AMB), which bonds the metal to the ceramic using a high-temperature soldering process instead of an oxidation process. In this process, copper is soldered to the ceramic to create the substrate.
The use of these substrates in power electronics presents a challenge due to the differences in the coefficients of thermal expansion between the metal layers (copper) and the ceramics. When temperatures change, the materials expand or contract differently, putting mechanical stress on the bond between the ceramic and metal. This is where the difference between DBC and AMB arises. The bond between the copper and ceramic in AMB is stronger and can withstand more thermal cycles or larger thermal changes before the ceramic and metal separate, a phenomenon known as "delamination". If delamination occurs, the thermal connection between the copper and ceramic is lost, leading to overheating and ultimately device failure.
Once an appropriate substrate has been selected, the next step is to attach the semiconductor to the substrate and the substrate to the cooling system. The process of connecting the semiconductor to the substrate, also known as "die attach," was traditionally done using solder. Although solder has a reasonable thermal conductivity, it is not very mechanically durable and repeated mechanical stress must be limited to prevent fatigue failure.
Mechanical fatigue is a phenomenon that arises when a material is subjected to repeated loads, resulting in the gradual development and spread of cracks over time. In this case, a single application of the load will not cause immediate failure, but repeated application and removal of the load will eventually lead to failure. For instance, bending a wire once will not break it, but if this wire is repeatedly bent and straightened, it will eventually fail due to fatigue at the bend point.
The fatigue life characteristics of different materials vary; this behavior is generally presented in a chart indicating the peak load and the number of cycles before failure occurs. Understanding the behavior of solder materials under cyclic strain is complex, as factors such as relaxation time significantly impact reliability.
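A common way to model low-cycle fatigue of this kind is the Coffin-Manson relation, which links the plastic strain swing per cycle to the number of cycles to failure. The material constants below are illustrative placeholders, not measured solder data:

```python
def coffin_manson_cycles(plastic_strain_range, eps_f=0.3, c=-0.5):
    """Coffin-Manson low-cycle fatigue estimate:
    delta_eps / 2 = eps_f * (2 * N_f)**c, solved for N_f.
    eps_f (fatigue ductility) and c (fatigue exponent) are illustrative
    constants chosen for the example, not real solder properties."""
    return 0.5 * (plastic_strain_range / (2 * eps_f)) ** (1.0 / c)

small = coffin_manson_cycles(0.01)   # mild thermal swing
large = coffin_manson_cycles(0.02)   # doubled strain swing
print(f"cycles at small swing: {small:.0f}, at doubled swing: {large:.0f}")
```

With c = -0.5, doubling the strain swing cuts the fatigue life by a factor of four, which is why larger or more frequent temperature excursions age solder joints so quickly.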
In general, it is accepted that solder is not a material suitable for joints subjected to large mechanical loads, leading to extensive research on alternative methods for die attach. One of the more robust alternatives is sintering, which involves a complex process of forming a solid mass from a powder mixture of metals using pressure and temperature, but without melting. Forming a hard snowball from loose snow by pressing it together in your hands is a form of sintering. Sintering is also commonly used in the manufacture of ceramics, powder metal parts, and some plastics.
The sintering method for die attachment typically uses a paste made from a mixture of silver and copper powder, along with an organic filler material, which evaporates during the sintering process. The die is placed on the mixture and subjected to pressure and heating in a controlled environment to produce the final joint.
The sintered joint that results has greatly improved thermal conductivity, often three times greater than that of solder, and a significantly higher melting point, typically over 960°C, compared to the melting point of solder which is around 220°C. The most notable advantage of the sintered joint is its capability to withstand cyclic mechanical stress, making it ideal for use in power modules where the die-to-substrate joint is subjected to cyclic thermal stress driven by the different thermal expansion of the materials.
The sintering process for die attachment is significantly more expensive than soldering due to the cost of the materials involved and the additional process steps. The most sophisticated cooling technologies, including ceramics, sintering, and related procedures, can cost more than the semiconductor being cooled. Currently, there is ongoing research to find ways to reduce the cost of sintering, such as developing sinter pastes with reduced or no silver content. For instance, pure copper sintering is being explored as a cost-effective alternative.
Having connected the semiconductor chips, or die, to an electrically insulating substrate, the next step is to remove the heat from the system by connecting the substrate to a fluid cooler. A common method of doing this is by soldering the substrate to a heat exchanger, as both sides of the substrate are equipped with a metal layer. Other options for attachment include mechanical clamping and the use of thermal paste or thermal adhesives.
Sintering is also gaining traction in high-performance power modules for attaching the substrate to the cold plate; however, since the areas involved are much larger than the semiconductor die, the cost is significant. Often the cold plate is pre-attached to the substrate by the power module manufacturer to ensure optimal thermal performance.
This construction method, sintering the die to a DBC or AMB substrate and then to a pin fin cooler, provides the best thermal performance and reliability from the semiconductor junction to the cooling fluid. Nonetheless, there are other system-level factors to consider, and various approaches are being taken by other organizations, but they are outside the scope of this article.
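The junction-to-coolant path described above can be approximated as a one-dimensional stack of conduction resistances, R = t / (k·A). The layer thicknesses and conductivities below are typical textbook figures, and the stack is a simplified sketch rather than any specific module:

```python
# Each layer: (name, thickness in m, thermal conductivity in W/(m*K)).
# Illustrative sketch of a sintered die on an AlN DBC substrate, 1 cm^2 area;
# spreading effects and the coolant interface itself are ignored.
AREA_M2 = 1e-4
stack = [
    ("silicon die",        200e-6, 150.0),
    ("sintered Ag joint",   50e-6, 200.0),
    ("DBC copper, top",    300e-6, 400.0),
    ("AlN ceramic",        635e-6, 170.0),
    ("DBC copper, bottom", 300e-6, 400.0),
]

# 1-D conduction: R_th = thickness / (k * area), summed through the stack.
r_th = sum(t / (k * AREA_M2) for _, t, k in stack)
delta_t = 500.0 * r_th  # temperature rise across the stack at 500 W dissipation
print(f"stack resistance: {r_th:.3f} K/W, rise at 500 W: {delta_t:.0f} K")
```

Even in this idealized stack, the ceramic dominates the thermal resistance, which is why substrate material choice matters so much.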
Semiconductor switches, including MOSFETs (Si or SiC), bipolar transistors, and IGBTs, are all three-terminal devices. Two of the terminals are the power connections, one for each side of the switch, and the third is the control input. The control input is connected to a small signal, usually a voltage or current, which is used to turn the switch ‘ON’ or ‘OFF’. In MOSFETs and IGBTs, this control input is known as the gate, while in bipolar transistors, it is called the base.
The bottom of the die is either soldered or sintered to the copper layer of the substrate, which then also acts as the drain connection, or the collector in the case of an IGBT. The remaining two connections, the gate and source (or gate and emitter for an IGBT), are established on the top of the die.
The most commonly used method for creating these top-side connections is through the use of a wire bonder. This machine uses very fine, pure aluminum wires to "stitch" together the tops of the die and other traces on the substrate or bus bars to form all the electrical connections.
The wires used for these connections are incredibly thin, with a typical diameter of 0.125mm for the gate connection and 0.5mm for the power or source connections. For larger bonds, such as those found in power devices, pure aluminum is typically used, while for smaller devices like microprocessors, even thinner wires made from gold are employed due to their superior electrical conductivity and ease of bonding.
Although aluminum wire bonding is still the preferred method for the majority of semiconductors, the latest power semiconductor die has surpassed its capabilities in power module applications. Alternative methods of creating top-side connections that can carry larger currents and offer better reliability have been the focus of ongoing research. While there are many ideas on how to electrically connect devices, the real challenge lies in developing the material science and manufacturing processes that make these ideas feasible.
A MOSFET or IGBT chip is produced from a thin slice of silicon, typically 200um (0.2mm) thick, similar to a piece of thin cardboard. The desired circuit structures are then "printed" onto the top of the semiconductor material through various lithography steps; whether it is a computer CPU or a large power switch, the fundamental concept remains the same.
The printed structures are located within the top few micrometers of the silicon wafer and, including the metal coating that connects the underlying semiconductor circuits, they are approximately 10-15um thick. This means that all of the activity in a semiconductor takes place in the top few percent of its thickness, with the rest serving merely as an electrical conductor.
The extreme thinness of this "active layer" makes it highly susceptible to damage, particularly when connecting large pieces of aluminum or copper to it. The 0.5mm bond wire is over 200 times thicker than the active silicon layer. As a result, securely attaching solid copper clips to the top of the die has required extensive material science and process development research into how to do so repeatedly without damaging the device. This process must be capable of being performed millions of times per year with a high yield.
Recent advancements in top connection technology have paved the way for various new approaches to making high-power connections to the top of the die. However, temperature is a crucial factor to consider. While most commercial chips have a maximum semiconductor junction temperature of 150°C, high power and automotive applications extend this limit to 175°C or even 200°C. These upper limits are not determined by the semiconductors themselves, but rather by how they are constructed into power devices and modules.
The melting point of SAC305, a widely used solder in electronics production, including die attachment, is 217°C, and it fully melts at 220°C. However, it begins to soften well before reaching its melting point, restricting its practical upper limit to around 190°C. Similarly, the typical black epoxy plastic used to encase semiconductors has an operating temperature ceiling of around 200°C.
The maximum temperature at which a semiconductor can operate is determined by its band gap energy, which is the amount of energy required to alter its behavior as a semiconductor. For example, silicon has a band gap energy of 1.1 electron volts (eV), allowing it to theoretically operate at up to 290°C. In contrast, a wide band gap device made of silicon carbide (SiC) has a band gap energy of 3.3 eV, putting its theoretical upper temperature limit at over 1000°C. However, while the theoretical limits of silicon and gallium nitride have been proven experimentally in laboratory settings, the upper limit of SiC has not been confirmed. Nonetheless, operation at over 300°C has been demonstrated.
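The link between band gap and maximum temperature can be sketched with the standard intrinsic-carrier-concentration relation, n_i ∝ T^1.5 · exp(−Eg / 2kT): a device stops behaving as a doped semiconductor roughly when thermally generated intrinsic carriers overwhelm the doping. The prefactor and doping level below are illustrative, silicon-like values, so the results are order-of-magnitude estimates only:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def n_i(temp_k, e_gap_ev, prefactor=5.2e15):
    """Approximate intrinsic carrier concentration (cm^-3):
    n_i ~ prefactor * T^1.5 * exp(-Eg / (2*k*T)).
    The prefactor is a silicon-like value used purely for illustration."""
    return prefactor * temp_k ** 1.5 * math.exp(-e_gap_ev / (2 * K_B * temp_k))

def max_temp_c(e_gap_ev, n_dope=1e15):
    """Bisect for the temperature where intrinsic carriers reach an
    assumed doping level -- a common rule of thumb for the loss of
    semiconductor behavior."""
    lo, hi = 300.0, 3000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if n_i(mid, e_gap_ev) < n_dope:
            lo = mid
        else:
            hi = mid
    return lo - 273.15

print(f"Si  (1.1 eV): ~{max_temp_c(1.1):.0f} C")
print(f"SiC (3.3 eV): ~{max_temp_c(3.3):.0f} C")
```

Even with these crude assumptions, the estimate lands near the ~290°C silicon figure and well above 1000°C for SiC, illustrating why wide band gap devices have so much thermal headroom.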
The practical upper operating temperature of semiconductors is not limited by the semiconductor chip itself. Instead, it is constrained by the materials used in the modules and devices housing them, such as the solder and epoxy plastic. For example, as mentioned above, SAC305 solder has an upper operating temperature of 190°C. The black epoxy plastic used to encase semiconductors also has an upper operating temperature of around 200°C. So, we are now faced with a semiconductor that can run much hotter than the materials that are used to mount, connect, and encase it.
In the last three sections, we have delved into the advancements that have propelled inverter technology to its current state. As we ponder the possible future advancements of this technology, it becomes clear that the question is philosophical as much as it is technological. To envision where inverter technology might be headed, we need to step back and take a broader, 30,000-foot view of the field.
When considering potential future improvements, several possibilities come to mind: increased inverter efficiency, reduced size, lower cost, or even integration into motor or axle housings. However, it's important to recognize that these improvements do not fundamentally alter the way inverters function or how they interact with the electric motor.
One key area of focus in inverter technology has been efficiency. A well-designed inverter today can achieve impressive efficiencies of 98-99%. While striving for even higher efficiency is certainly commendable, we must acknowledge that, at a macro level, efficiency improvements will ultimately be constrained by the 100% physical limit. Consequently, any future advancements in this area can only be incremental and minor.
Furthermore, it is worth noting that, with inverters already operating at such high-efficiency levels, the majority of system losses stem from the machine itself – an aspect that three-phase inverters have no direct influence over.
In terms of size and weight, modern inverters are comparatively compact and lightweight, particularly when contrasted with the overall vehicle mass. An inverter currently occupying a 5-liter volume and weighing 10kg would have a minimal impact on a typical EV's weight if it were reduced to a 1-liter volume and 2kg weight, given that EV curb weights often exceed 2000kg. For commercial vehicles with substantially higher curb weights, the benefits of reducing inverter weight become even less significant. While reducing size and weight is important, it is essential to remember that these improvements will yield diminishing returns.
The last area is cost, which encompasses a broad range of topics: material costs, design for manufacturing, use of automation, and so on. When considering cost, the market must also be considered, as an economy car and a high-performance vehicle will have different cost criteria. Cost, in its many forms, whether money or energy, will always be an important driver of innovation.
As we look to the future, it becomes apparent that the humble 3-phase inverter is almost perfected and the next stage of motor drive development may not lie in the continued incremental improvement of this architecture, but rather in refining and optimizing its integration with other components of the system and rethinking how we drive electric machines. By approaching the future of inverter technology from this broader perspective, we open up new avenues for advancements and foster a deeper understanding of how this critical technology will continue to evolve in the years to come.
Exro, a leading technology company specializing in power electronics, is addressing the exciting possibilities of where the market is heading next. Exro has developed a novel topology for traction inverters, Coil Driver™ technology, which deeply integrates power electronic components into the electric motor, essentially allowing the inverter to control individual motor coils.
By controlling motor coils through the power electronics, this technology can dynamically change motor coil configurations while the motor is in operation. This means that the Coil Driver can optimize motor torque, power, and efficiency on the fly. When combined with advanced control algorithms and techniques, this new category of traction inverter can optimize performance and efficiency from a motor and inverter perspective, offering a level of enhancement that is not possible with standard 3-phase inverters.
The Exro traction inverter, the Coil Driver™, uses patented coil switching technology to optimize the powertrain. Without interruption, the Coil Driver™ can seamlessly switch between series and parallel modes, one optimized for torque and the other for power and efficiency at speed. The ability to change configurations allows efficiency optimization for each operating mode, resulting in smarter energy consumption. Its series mode provides exceptional low-end torque, ideal for acceleration and gradeability, while its parallel mode provides greater efficiency, torque, and power output at high speeds.
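The series/parallel trade-off follows from basic circuit theory. The sketch below is a lumped-parameter illustration with made-up coil values; it shows only the textbook effect of reconfiguring two identical coils, not Exro's proprietary implementation:

```python
def coil_config(mode, r_coil=0.05, kt_coil=0.2, n_coils=2):
    """Lumped-parameter sketch of reconfiguring identical phase coils
    (illustrative values only).
    Series:   resistances and torque constants add -> more torque per amp.
    Parallel: resistance drops, but torque constant (and back-EMF) stays at
              the single-coil value -> less back-EMF per rpm, so the motor
              can reach higher speeds on the same DC bus voltage."""
    if mode == "series":
        return {"R": n_coils * r_coil, "Kt": n_coils * kt_coil}
    if mode == "parallel":
        return {"R": r_coil / n_coils, "Kt": kt_coil}
    raise ValueError(mode)

series = coil_config("series")
parallel = coil_config("parallel")
i_phase = 100.0  # phase current, A
print(f"series torque/amp:   {series['Kt'] * i_phase:.0f} Nm")
print(f"parallel torque/amp: {parallel['Kt'] * i_phase:.0f} Nm")
```

In this simplified view, series mode doubles torque per amp (low-speed pull), while parallel mode halves the winding resistance and back-EMF constant's effect on speed, favoring high-speed operation.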
The innovative technologies being developed at Exro pave the way for lighter, more compact, and more efficient electric and hybrid vehicles. This accelerates the transition to an electrified and sustainable world while providing the industry with the next leap forward in traction inverter technology and design.
In conclusion, traction inverters have evolved significantly since the early days of electric vehicles. From their rudimentary, inefficient beginnings, today's traction inverters are highly advanced and sophisticated devices that play an essential role in the performance and efficiency of electric and hybrid electric vehicles. However, with inverter optimizations nearing their limits, it is time for a fresh perspective in the industry and the introduction of a new class of technologies that push the boundaries of efficiency and performance.
If you are interested in learning more about Exro’s adaptive Coil Driver™ technology, please visit the following page: