I addressed some of the more significant issues with regard to threat intelligence a few weeks ago, but at least one point I made then bears repeating: If you're not prepared to act, you don't need intelligence. Let me get 'inside baseball' on you for a minute. In the secret intelligence world, the closest analog to today's "cyber threat intelligence" is warning intelligence. There are two categories of warning intelligence: strategic and tactical. At the risk of over-simplifying things: strategic is the intelligence you get before something goes 'boom;' tactical is the intelligence you get after something goes 'boom.' Or put another way: what's coming at you vs. what's happening to you.
Before "cyber threat intelligence" became a thing commercially, your government had over a decade of experience doing both strategic and tactical warning of cyber threats from both nation-states and non-state actors. I should know: I used to manage said system. Because of that experience I predict with a high degree of confidence that the main problem that today's commercial newcomers are going to experience is one of inaction on the part of their consumers, particularly those who occupy the C-suite.
Warning intelligence attempts to answer two main questions: what's most likely to happen, and what's the most dangerous thing that could happen? The idea being that if you're prepared for the worst, anything that falls short should be dealt with handily. The problem, of course, is that few people think the worst is going to happen to them. A decision-maker may opt to heighten readiness ("Hey everyone, keep your eyes open this week") but take no far-reaching action, because the "most likely" scenario is something existing mechanisms and capabilities can address.
But what about the "most dangerous" scenario? Well, what used to happen, with alarming frequency, was that decision-makers (Generals or Admirals) would look at their position, trust in the thought- and decision-making process that got them those stars, and say, "What do those nerds know anyway? How could a bunch of hackers cause me any pain and suffering?"
...and then large chunks of .mil would fall victim to an attack and they would scream: "How come you didn't tell me this was going to happen?!"
History is replete with examples of decision-makers not liking or agreeing with flashing red lights in front of them, and then a whole lot of people paying the price for their inaction. A full break-down on the flaws and fallacies people use to justify ignoring intelligence is something I can cover another day, but for now let it suffice that people, being human, will come up with all sorts of reasons to not think clearly when presented with a sufficiently novel scenario. Cyber threat intelligence is not going to escape this fate, and as an intelligence wonk you need to be OK with that.
On the other hand, decision-makers have to understand that "most dangerous" is called that for a reason, and while the probability that it will be realized may be small, it is not zero. If you are presented with a most dangerous scenario, you should be making sure that you have the means and methods in place to counter the threat - or to respond quickly and comprehensively once the threat is realized. Maybe that means people have to put in extra hours to test your response plans; maybe that means some non-security work has to take a back seat for a bit. Whatever it is, it's going to be orders of magnitude cheaper than crossing your fingers. Hope, it has been said, is not a strategy.
Cyber threat intelligence is just one of many things you can use to help defend your enterprise, but it is not a silver bullet. The vast majority of the time, the warnings you receive are going to be busts. Over time you're going to start to think that because nothing you've been warned about has ever happened, nothing ever will. That's the point at which you're going to devalue intelligence and be caught by "surprise." Intelligence will have "failed" you, and you will go looking for heads to cut off.
Start by looking in the mirror.