UK Def Standard EMC Testing: Conducted Emissions Set-ups, Pitfalls and Troubleshooting

I recently spent several weeks supporting a client with a military/defence product. Due to confidentiality, I cannot share details of the product itself, but the testing was carried out at the Faraday Test Centre at BAE Systems.


It is worth noting that the service at the test centre was excellent. The technical personnel (Ken and David) have many years of experience testing against MIL, DEF STAN, and DO-160 standards, and the test engineer, Steve, provided very effective support during the test set-up.


In this article, I would like to share some observations on conducted emissions testing, focusing on practical pitfalls and lessons learned.


Why Defence Conducted Emissions Testing is Different


As the name suggests, DCE01 (UK Defence Standard Conducted Emissions Test 01) evaluates conducted emissions of power supply lines from a DUT. However, both the frequency range and cable configurations differ significantly from other commonly used military standards.


In addition, for conducted susceptibility testing (DCS06), Def Stan 59-411 requires that conducted emissions on the power lines are measured both before and after the transient test. This is to ensure that the power filter has not been degraded or damaged during the harsh susceptibility testing.


A Fundamental Question on the Test Set-up


The conducted emissions test set-up defined in UK defence standards raises an interesting technical question.


My colleague in the U.S., Ken Javor, highlighted this in his article “Line Impedance Stabilisation is in its Seventieth Year and Still Going Strong”. He noted:

But look at specifications such as RTCA/DO-160 and DEF STAN 59-411, with 400 MHz LISNs and 100 MHz conducted emission control. A one-meter-long power lead is a third wavelength at 100 MHz. And for CISPR 25, using a two-meter-long power wire, the LISN is over a half-wavelength from the test sample. All the work and expense that went into the extended frequency range LISN is wasted when the parasitics controlled within the LISN is simply migrated to the LISN – test sample interconnection.

In simple terms, the purpose of a LISN is to provide a known and controlled impedance, representing the impedance of the cable between the DUT and the circuit breaker box. However, when long cables are used—especially at higher frequencies—they behave as transmission lines, introducing standing waves and impedance variations.


This leads to an important question: if the cable impedance is no longer controlled, does this not undermine the purpose of the LISN?


In UK defence testing, cable lengths can be several metres. This may be acceptable at lower frequencies (a few MHz), but when testing extends into tens or hundreds of MHz, standing wave effects become significant.
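A quick back-of-the-envelope check makes the point. The sketch below (a minimal illustration, not part of any standard) expresses a cable's length in wavelengths and applies the common rule of thumb that lumped-element behaviour only holds for cables shorter than roughly a tenth of a wavelength:

```python
# Electrical length of a power lead vs. frequency.
# Assumption: the lambda/10 rule of thumb marks where a cable stops
# being "electrically short" and starts behaving as a transmission line.

C = 299_792_458  # speed of light in free space, m/s

def electrical_length(cable_m: float, freq_hz: float,
                      velocity_factor: float = 1.0) -> float:
    """Cable length expressed as a fraction of a wavelength."""
    wavelength = C * velocity_factor / freq_hz
    return cable_m / wavelength

for f_mhz in (1, 10, 100):
    el = electrical_length(2.0, f_mhz * 1e6)
    regime = "transmission line" if el > 0.1 else "electrically short"
    print(f"2 m lead at {f_mhz:>3} MHz = {el:.3f} wavelengths -> {regime}")
```

At 1 MHz a 2 m lead is a tiny fraction of a wavelength, but at 100 MHz it is roughly two-thirds of a wavelength, which is exactly the regime Ken Javor describes.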


There are undoubtedly historical and practical reasons behind this methodology, and it would be interesting to hear perspectives from engineers who have worked extensively with these standards.


A Practical Pitfall: Pre/Post DCS06 Comparison


As mentioned earlier, Def Stan 59-411 requires conducted emissions to be measured before and after the DCS06 transient test.


This means the test set-up must be identical in both cases. Any variation can lead to misleading results.


In the following example, a broadband noise increase was observed after the susceptibility test. At first glance, this suggested that the DUT had failed DCS06, possibly due to damage to the power filter.


The standard itself does not define a strict pass/fail criterion, only stating that the difference must be “significant.” In practice, guidance from the UK EMC Test Laboratory Association (EMCTLA) suggests a threshold of approximately 8 dB.


Based on this, the increase observed between 200 kHz and 5 MHz would be considered significant, leading to the conclusion that the unit had failed.
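The comparison itself is straightforward to express. The sketch below applies the ~8 dB EMCTLA guidance mentioned above as the "significant change" threshold; the frequency points and levels are illustrative values I have made up for the example, not real measurement data:

```python
# Pre/post DCE01 comparison using the ~8 dB EMCTLA guidance as the
# "significant change" threshold. All spectra below are hypothetical
# illustrative values (dBuV), not measured results.

THRESHOLD_DB = 8.0

def significant_increases(freqs_hz, pre_dbuv, post_dbuv,
                          threshold_db=THRESHOLD_DB):
    """Return (frequency, delta) pairs where the post-test level
    exceeds the pre-test level by more than the threshold."""
    return [(f, post - pre)
            for f, pre, post in zip(freqs_hz, pre_dbuv, post_dbuv)
            if post - pre > threshold_db]

freqs = [200e3, 1e6, 5e6, 30e6]    # hypothetical measurement points, Hz
pre   = [40.0, 35.0, 30.0, 25.0]   # pre-DCS06 levels, dBuV
post  = [52.0, 46.0, 39.5, 26.0]   # post-DCS06 levels, dBuV

for f, delta in significant_increases(freqs, pre, post):
    print(f"{f/1e6:.1f} MHz: +{delta:.1f} dB -> investigate before failing the DUT")
```

Note that the comparison only tells you *that* the spectrum changed, not *why*; as the rest of this article shows, the cause is not necessarily the DUT.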


However, as design engineers, we should pause and ask: is this conclusion physically reasonable? Would a damaged filter really produce this type of broadband increase?


Root Cause: Not the DUT

From experience, such conclusions are often made too quickly.


The first instinct is to check cable layout and routing. However, cable changes typically result in resonance shifts, not broadband noise increases.


In this case, the root cause was unexpected: it was not the DUT inside the chamber, but supporting equipment located outside the chamber.


After hours of investigation, we found that before the DCS06 test the supporting equipment had been running on battery power. After the test, the battery was depleted and a charger was connected. This change went unnoticed and introduced additional noise into the system. This is illustrated below:



How Did the Noise Enter the Chamber?


This raises an interesting EMC question.

Noise currents must flow in a loop—so how did noise from outside the chamber couple into the DUT?


The most likely explanation is the coaxial cable connecting the supporting equipment to the DUT. Noise from the charger may have coupled onto the cable (either the shield or the centre conductor; in this case, I suspect the centre conductor), entered the DUT, and then coupled onto the power lines. The return path was likely either the ground lead of the linear power supply or some other route (it would be an interesting test to swap the linear power supply for a battery).


Due to time constraints, this was not fully investigated, but the mechanism is consistent with typical coupling behaviour observed in EMC systems.


When the DCE01 test was repeated with the supporting equipment running on battery power, the results matched those obtained before the DCS06 test, and the DUT was therefore deemed to have passed DCS06.


Key Lesson Learned


The key takeaway is simple but often overlooked:

In EMC testing, what happens outside the chamber can be just as important as what happens inside.

Even experienced engineers can miss these details. Small changes in supporting equipment, grounding, or power sources can significantly affect test results.


Conclusion


UK defence conducted emissions testing presents unique challenges, particularly in terms of set-up and interpretation of results. Engineers must not only understand the standards but also critically evaluate whether the observed results make physical sense.


Careful control of the entire test environment—including supporting equipment—is essential to avoid misleading conclusions and unnecessary redesign efforts.
