Threat Image Projection and Computer Based Training: effective training tools

Threat image projection (TIP) and X-ray computer-based training (CBT) have been around for a while and are used to measure the effectiveness of X-ray operators at detecting threats. In some parts of the world they are used to certify X-ray operators and are actually ‘required’ before a security officer can work at a screening checkpoint. Many technical papers have been written on the effectiveness of TIP and CBT, and all of them claim that they are effective. However, when you talk to the subject matter experts (SMEs) in the field, many do not agree on the overall effectiveness of TIP and CBT as training tools. John D. Howell examines the causes of their scepticism.

I asked an expert who runs the world’s largest and most comprehensive penetration testing programme about TIP and CBT. His response:

TIP/CBT doesn’t accurately represent threats in the environment that an X-ray operator might encounter during operational screening. Right now, it is similar to standardised tests in public schools that are made for the administrators but not the students.

So, I reached out to other SMEs and the general response was the same. The following best sums up the replies I received:

TIP is an excellent tool for screeners to find TIPs, and that is partly because of the objects that are being used to make the TIPs. With CBT, I have yet to find a screener who has actually improved his/her skills with CBT as it is currently being used. In my opinion, the most valuable training is hands-on with real objects, giving a real understanding of what an IED is composed of and what the different components look like separated, both in X-ray images and live. Since I started using realistic IEDs, first in penetration tests and then, after the test, showing the threat to the screener, I have seen more improvement than in years of CBT and TIP.

So why are TIP and CBT not considered ‘realistic’? To really answer that question, one must first understand how the TIP and CBT programmes work and how the threat images are created for each library. I make TIP libraries and have seen X-ray vendors’ TIP libraries. This includes the bag sets (clean bags) that are used for CBT and the operator training systems (simulators) built into the X-rays. As a bomb technician, I know improvised explosive devices (IEDs) and, being from America, I know guns and knives! The core problems for CBT are:

1. Fictional threat images (FTIs) are low quality and unrealistic
2. A limited number of FTI categories, given all the different types and configurations of threats
3. A limited number of FTI angles (typically four) does not represent real-world complexity
4. Bag sets have no false alarms on almost all CBT systems on the market
5. Bag sets are not categorised by the amount of clutter and how it can affect the FTI, nor can they alarm or not alarm based on FTI placement in the bag
6. No automatic explosive detection or ‘missed detections’ for explosives on almost all CBT systems on the market
7. Explosive detection ‘windows’ are not accurately represented in the CBT system based on end-user settings (threat mass or size cut-offs)
8. No real quality standards or oversight for libraries
9. FTI difficulty levels are poorly standardised in analytics and reporting
10. Bag sets are not representative of country or checkpoint type
11. Virtual keyboards are not the same as actual button pushing
12. Image and programmable key settings are generic and not per end-user SOPs

With TIP, the problems are similar to those with CBT: FTI quality/realism; limited FTI categories; limited FTI angles; unrealistic automatic detection or missed detections for explosives; no real quality standards or oversight; and FTI difficulty poorly standardised.

FTI Quality/Realism is Poor (Guns)

When you look at the gun images in a threat library, you will see that quality depends on where the library was made: outside the US, the quality of the guns is much lower. The reason is obvious; getting access to guns overseas to use in a threat library is much more challenging than in the US. X-ray vendors and CBT companies use anything they can get, so you end up seeing many toy guns, BB guns, CO2 pellet guns, and airsoft pistols in the threat libraries. Toy guns and BB/pellet guns are an issue and do need to be located in a bag, but they do not appear on X-ray like real guns.

When you look at toy/BB/pellet guns next to real guns, they look nothing alike in X-ray, and the density and Zeff of the metal in a real gun is much higher than that of the toys. Toy guns need to be part of an FTI library, but they need to be in a separate category because of how different they are from a real weapon. The image below illustrates the difference between toy guns and a real gun.

real gun vs toy

The use of high-density automatic detection at checkpoints is becoming more common; because real guns have a much higher density and Zeff than toy guns, the high-density automatic detection will respond differently to each. The high-density feature works by flagging where absorption exceeds a set percentage across a squared surface area of pixels. I have yet to find any CBT that incorporates this into the software and FTIs. As you can see below, a high-density alert, when set correctly, is very effective at detecting a real gun threat, but it would not detect toy guns. This is another perfect example of why gun FTIs are not realistic.
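The percentage-over-a-pixel-area logic described above can be sketched in a few lines. This is a minimal illustration, not a vendor algorithm: the threshold, window size, and alarm fraction below are all illustrative assumptions.

```python
import numpy as np

def high_density_alert(absorption, dense_threshold=150.0, window=8, min_fraction=0.9):
    """Alarm if any window x window patch of pixels is almost entirely
    above an absolute absorption cut-off (a near-opaque surface area).
    All parameter values here are illustrative, not vendor settings."""
    dense = absorption >= dense_threshold
    h, w = dense.shape
    for r in range(h - window + 1):
        for c in range(w - window + 1):
            # Fraction of 'dense' pixels inside this square surface area
            if dense[r:r + window, c:c + window].mean() >= min_fraction:
                return True   # contiguous high-density region found
    return False

background = np.full((32, 32), 10.0)   # simulated low-absorption bag content
real_gun = background.copy()
real_gun[10:20, 10:20] = 200.0         # dense steel region absorbs heavily
toy_gun = background.copy()
toy_gun[10:20, 10:20] = 60.0           # plastic/pot-metal absorbs far less
```

Run on these two simulated images, the alert fires for the steel region but not the toy, which is why real and toy guns belong in separate FTI categories.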


Guns can be presented to an X-ray operator in several different configurations, and each configuration looks different. Your TIP and CBT gun threat images must cover all these configurations or they are not realistic. Below are the different configurations in which a single gun can be presented to an X-ray operator.


different types of guns

FTI Quality/Realism is Poor (IEDs)

When you start looking at the IED images used in most TIP and CBT libraries, it is obvious to a bomb technician that whoever is making them is not a bomb technician! You will typically see massive numbers of IED FTIs that are nowhere close to what the bad guys are using, nor close to being technically correct in terms of circuit design; the detonators are not X-ray correct, nor inserted into the explosives, and the explosive simulants are not density or Zeff correct. There are no standards in place for explosive simulants. Below are some IED FTIs that are perfect examples of the typical quality of images you find in a CBT or TIP library.

Examples of IED Tips

In the image below, the FTI is supposed to be a 1lb TNT demolition block, but when you compare it to a real one they look nothing alike. This type of issue is very common in IED FTIs. The real TNT demo block also generated a ‘red box’ automatic detection, and you can see from the colour that the simulated TNT is much lower in Zeff and density.


Another very common issue with IED FTIs is the blasting caps/detonators. Many libraries just use empty tubes with a wire tied in a knot inside, which looks nothing like a real detonator. You will also see entire libraries that use maybe two or three different types of detonators; there are many different types on the commercial market, and each type looks different in an X-ray. When you add the improvised detonators that terrorists like to use, the number of configurations can exceed 30. The images below are very common; the detonators are typically never inserted into the explosives and are just stuck to the outside.

Detonator not in the explosive

One of the biggest realism issues with IEDs is that TIP and CBT do not accurately capture how explosives respond when automatic detection is being used. Most CBT systems on the market today do not even have explosive auto-detection built in. The other issue is that, if they do show auto-detection of the FTIs, the software cannot consider the amount of clutter in the bag. Explosives surrounded by higher-Zeff materials typically will not generate an automatic detection alarm, and TIP and CBT cannot currently simulate this level of realism.

Automatic detection

The next huge problem with IEDs, and a lack of realism with automatic explosive detection, is how ‘threat mass’ detection algorithms are used. CBT vendors may not know what these ranges are, so anything they develop will not be able to simulate how threat mass affects detection. I think threat mass is a bad concept, but if it is used and not incorporated into TIP and CBT, they will never be realistic. The proof on this issue alone is when you see high TIP and CBT scores but low penetration testing scores: the highest percentage of missed detections leads directly back to threat mass.

CBT and TIP IED threat images rarely incorporate the low-density range of explosives. The world seems to think everything explosive has a density of 1.4 g/cc and above (TNT, C-4, Semtex, etc.). This is not true.

Consider homemade explosives (HMEs) alone: TATP, HMTD, PETN, AN, ANFO, AN/AL, urea, chlorates, double-base smokeless powder, single-base smokeless powder, black powder, black powder replacement, nitromethane, AN + NM, and so on.

When you research all the different explosives that are on the market today, along with the HMEs, you will find that the clear majority fall into the 1.2 g/cc and below range. When you look at what is being used in TIP and CBT libraries, they are almost always high-density explosive simulants.
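The coverage gap described above can be made concrete with a simple density-banding check, using the 1.2 g/cc boundary from the text. The simulant names and density values below are illustrative placeholders, not measured data.

```python
LOW_DENSITY_CUTOFF = 1.2  # g/cc, the boundary discussed in the text

def density_band(density_g_cc):
    """Return the coverage band an explosive simulant falls into."""
    return "low" if density_g_cc <= LOW_DENSITY_CUTOFF else "high"

# Hypothetical library inventory; names and densities are placeholders.
simulants = {
    "cast_block_sim": 1.6,   # e.g. a TNT/C-4-style high-density simulant
    "granular_mix":   0.9,   # e.g. a prilled-oxidiser-style mix
    "powder_charge":  1.0,   # e.g. a smokeless-powder-style fill
}

coverage = {}
for name, rho in simulants.items():
    coverage.setdefault(density_band(rho), []).append(name)

# A library whose "low" list is empty is missing most of the HME range.
```

An audit like this against a real TIP or CBT library would quickly show whether the low-density majority of threats is represented at all.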


The next issue with CBT and TIP IED threat images is the circuits being used. This has a massive effect on the realism of TIP and CBT programmes, and until the industry starts building IED threats based on actual terrorist tactics and techniques, you are going to have IEDs that are not realistic.

IEDs come in three very different and distinct configurations. If you do not have these IED sub-categories in your FTIs, you are not providing the end-user a realistic view of the IED threat. Breaking complexity and difficulty down in a sub-menu will allow you to better track performance based on how difficult the IEDs are to identify. One-size-fits-all categories are neither effective nor realistic.

different types of IED

FTI Quality/Realism is Poor (not enough angles)

TIP and CBT threat libraries generally do not have multiple angles of each threat object. Most ‘might’ have two to four angles for each threat (some only one), and this is just not realistic. When you run a threat object through an X-ray machine at every possible angle, it becomes obvious that one to four angles are not enough. If an operator is only trained on threats at angles considered easy to identify, the X-ray operator is not going to be adequately prepared when presented with a real threat at a hard-to-identify angle. As a bare minimum, there should be eight angles for each threat object, and it could be argued even more.

Pistol views

Even when you look at IEDs, you can see that the angle at which a threat is scanned changes the overall complexity and difficulty of detecting it. Unless the screener is exposed to all the different angles, they will not be properly prepared to identify the threat in a bag. The image below shows the same IED X-rayed at 16 different angles. Each one of the images is different, and some of them are drastically different.

Bag Sets for CBT Quality/Realism is Poor

When I was working at the U.S. Marshals Service and we started having the Court Security Officers undergo CBT, we found that all the bags in the session bag files were airport bags and, making it even worse, all from Europe. The odd power cords in the bags, and numerous other items, almost made using the system ineffective. Marshals deal with people coming to a courthouse, not somebody getting on a plane. We were able to fix this by taking bag images from the history files (online recording) of an X-ray unit running at one of their courthouses, and we used these bags to replace the European airline-type bags; the realism when using the operator training system improved dramatically. Even at airports, the contents of bags at African airports differ from those at US airports – operators need to be trained on baggage images that reflect their operating environment.


The Number of TIP Images in the Current Standard Library Size is Too Small

Most of the current standards and/or models for the size of a TIP or CBT library are very small once you look at how many different types of threat are out there and the number of orientations required. A common number you will see is 1,000 to 1,500 threats in a library, and if each of those threats is run at four different orientations you are looking at a total of 6,000 images. That might sound like a large number but, when you look more closely, you will find that a 6,000-image TIP library covers only a small segment of all the different threats a screener could encounter. I put together a set of circuits covering all the different ways an IED could be set off and was able to construct 125 different IED circuits that were either electrically or non-electrically initiated. I then collected commercial, military, and homemade explosive (HME) simulants and packaged them in ½, 1, 1.5, and 2-pound configurations. I ended up with over 200 different explosive combinations that could be married up to the 125 different IED circuits.

If I were to attach each IED circuit to each different explosive, once in a holistic configuration and once in a component-based configuration, you would end up with 50,000 different explosive and circuit combinations. Now we must add the total number of orientations for each IED; the minimum ideal number would be eight orientations per threat object. That gives 400,000 different images for just 125 IED circuits and 200 explosives.
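The arithmetic above can be checked directly:

```python
# Reproducing the image-count arithmetic from the text:
circuits = 125        # distinct IED firing circuits
explosives = 200      # explosive/packaging combinations
configurations = 2    # holistic + component-based
orientations = 8      # minimum ideal angles per threat object

pairings = circuits * explosives * configurations
images = pairings * orientations

print(pairings)  # 50000 explosive-and-circuit combinations
print(images)    # 400000 images, vs a typical 6000-image library
```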

When you compare 400,000 IED images to a standard 6,000-image library, you can see that we are really only exposing the X-ray operator to a very small sample of the potential threats. When you use a 6,000-image library to evaluate a screener, you are only evaluating them on that specific library and the threats it contains. The proof of this is when you see penetration testing scores much lower than the TIP and CBT scores: the penetration test is exposing the screener to a threat they have never seen before.

Performance Data is Insufficient and Unrealistic

When you look at how TIP and CBT systems measure a screener’s performance, the current model needs to be improved. One of the biggest problems is how the systems categorise the difficulty of detecting threats in their many different configurations. This complexity is not accurately broken down in the basic model used for TIP and CBT scoring and the downloadable reports. It is possible to develop a more comprehensive breakdown of the threats and bags based on varying factors of complexity. To accomplish this, you have to create a standard model of how each threat category and bag complexity will be measured. The result would be a report providing a more detailed view of each screener’s performance based on the difficulty of the threat and how it was presented. The one-size-fits-all approach currently used is not an accurate assessment of screener performance. This detailed breakdown will also help identify areas where follow-up training needs to focus.
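A per-category, per-difficulty breakdown like the one argued for above could be tallied as follows. The categories, difficulty labels, and result records here are hypothetical examples, not output from any real TIP or CBT system.

```python
from collections import defaultdict

# Hypothetical session results: (threat_category, difficulty, detected)
results = [
    ("IED", "hard", False),
    ("IED", "easy", True),
    ("gun", "easy", True),
    ("IED", "hard", False),
    ("gun", "hard", True),
]

tally = defaultdict(lambda: [0, 0])   # (category, difficulty) -> [hits, shown]
for category, difficulty, detected in results:
    cell = tally[(category, difficulty)]
    cell[1] += 1
    if detected:
        cell[0] += 1

# Detection rate per (category, difficulty) cell
report = {key: hits / shown for key, (hits, shown) in tally.items()}
```

With this example data, the screener scores 100% on easy IEDs but 0% on hard ones, a gap that a single overall score would hide completely.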

Categorising Session Bags Based on Amount of Clutter

bag clutter

The amount of clutter in a bag plays a huge role in a screener’s ability to detect a threat. Bag clutter can also play havoc with any auto-detection features in use, especially explosive detection. Bags must be measured for clutter and given a level of difficulty (e.g. levels 1-3). One way to accomplish this is simply to measure the number of pixels in the higher Zeff ranges (11 and up). The image above is an example of how this could be done, with each bag given a difficulty level from 1 to 3.
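The pixel-counting idea above can be sketched as follows. The Zeff cut-off of 11 comes from the text; the level boundaries (10% and 25% of pixels) are illustrative placeholders.

```python
import numpy as np

def clutter_level(zeff_map, zeff_cutoff=11, level_2_frac=0.10, level_3_frac=0.25):
    """Assign a bag image a clutter difficulty level (1-3) by the
    fraction of pixels at or above a Zeff cut-off. The cut-off of 11
    follows the text; the level boundaries are placeholders."""
    frac = float(np.mean(zeff_map >= zeff_cutoff))
    if frac >= level_3_frac:
        return 3
    if frac >= level_2_frac:
        return 2
    return 1

sparse_bag = np.full((100, 100), 7.0)     # mostly organic material
sparse_bag[:10, :10] = 13.0               # 1% high-Zeff pixels
cluttered_bag = np.full((100, 100), 13.0) # dense metallic clutter throughout
```

The sparse bag lands in level 1 and the heavily cluttered bag in level 3, giving each session bag a difficulty label that scoring can use.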

Categorising IEDs Based on the Way they are Constructed

better categorise the threats based on the level of difficulty

The next issue is to better categorise the threats based on the level of difficulty they can present to a screener. In the example above, we have taken the category ‘IEDs’ and added sub-categories based solely on the level of difficulty they give the operator. Level 1 is an IED threat displayed in a component-based configuration, which would normally be easy for an X-ray operator to find. Level 2 becomes more challenging, with a holistic layout in which the IED components are more difficult to identify. The last level is the most challenging for an X-ray operator: an explosive device that has been hidden inside another object.

Categorising IEDs/Guns/Knives/Other Threats Based on the Angle at which they are Presented


When a CBT or TIP library is created, there is currently no requirement or standard to categorise a threat based on how hard its angle is to interpret. When you scan threats at more than one angle, as we have already shown, many angles are much more difficult to identify than others. These differences need to be identified and categorised in the threat library and software. The entire concept behind using TIP and CBT is to measure performance and identify the effectiveness of training; you cannot do this accurately without a detailed breakdown of what the screener is and is not detecting. In the example above, we have set three levels of difficulty for each threat based on the complexity of the angle. Angles that allow easy identification of the threat would be categorised as ‘easy’ and, as the angles become more challenging, the hardest would be categorised as ‘hard’.
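One way the easy/medium/hard angle categories could be encoded in a library is sketched below. The visibility scores, angle list, and labelling rule are hypothetical illustrations, not an existing standard.

```python
EASY_MAX_SCORE, HARD_MIN_SCORE = 1, 3  # illustrative boundaries

def angle_difficulty(visibility_score):
    """Map an assessor's visibility score (1 = clear, recognisable profile;
    5 = heavily foreshortened or overlapped) to a library category."""
    if visibility_score <= EASY_MAX_SCORE:
        return "easy"
    if visibility_score >= HARD_MIN_SCORE:
        return "hard"
    return "medium"

# One threat object scanned at eight angles, each scored by an assessor:
angles = {0: 1, 45: 2, 90: 1, 135: 3, 180: 4, 225: 3, 270: 2, 315: 5}
library_tags = {deg: angle_difficulty(score) for deg, score in angles.items()}
```

Tagging every stored angle this way lets the reporting side separate misses on hard presentations from misses on easy ones.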

Categorising IEDs (and Guns) Based on the Presence (or Absence) of an Auto-Detection


Many studies have proven the effectiveness of automatic detection, but most CBT systems do not incorporate this capability. When they do, they typically have everything alarm, which is not realistic. When a TIP or CBT threat image is captured, it is normally done by placing the threat on the belt of the X-ray machine and running it through by itself. In this configuration the X-ray has a higher chance of auto-detecting the threat because the explosive is not being affected by bag clutter. The reality is that, even though auto-detection is effective, it does not always work, and explosive threats can be missed by the system. When you add threat mass to this equation, the number of potential missed detections increases. To make TIP and CBT more realistic, each threat object should be scanned with auto-detection both on and off. This captures the threat object in both configurations in which the operator could see the item in a bag. To better score performance, the presence of an auto-detection would make the threat an ‘easy’ category and its absence would be categorised as ‘hard’. Exposing your screeners to both scenarios is more realistic.


Can TIP become a more realistic tool? Yes, but we need to harness actual experience from the field and understand the evolving threats we face. As with any technology, we must recognise the limitations of CBT, and strive to rectify the flaws that exist. This article aims to provide a catalyst for such improvement by a realistic assessment of the current state-of-play.

John Howell is an EOD consultant and the Business Development Consultant for 3DX-RAY Limited. John was a Physical Security Specialist and EOD Technician with the Department of Justice U.S. Marshals Service. From 2007 until 2011, he served as an EOD Technician with the U.S. Army National Guard, and from 1999 to 2007 with the U.S. Department of State Diplomatic Security, where he was the lead instructor for the Diplomatic Security Explosive Countermeasures Unit. He served with the U.S. Marines for 12 years (1987 – 1999), which included a tour of duty in the Persian Gulf during the Gulf War.