The last part is important. Good monitoring environments don't generate too many alerts. In most environments, event logging, when enabled, generates hundreds of thousands to billions of events a day. Not every event is an alert, but an improperly defined environment will generate hundreds to thousands of potential alerts -- so many that they end up becoming noise everyone ignores. Some of the biggest hacks of the past few years involved alerts that were ignored. That's the sign of a poorly designed monitoring environment.
The most secure companies create a comparison matrix of all the logging sources they have and what they alert on. They compare this matrix to their threat list, mapping each threat's tasks to the logs or configurations that can currently detect them. Then they tweak their event logging to close as many gaps as possible.
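The gap analysis above can be sketched as a few set operations. This is a minimal illustration, not any vendor's tool; the threat and log-source names are hypothetical, assumed only for the example.

```python
# Which log sources currently generate alerts for which threat techniques.
# (Source and technique names are hypothetical examples.)
alerts_by_source = {
    "domain-controller-logs": {"password guessing", "golden ticket"},
    "edr": {"malware execution", "credential dumping"},
    "proxy-logs": {"c2 beaconing"},
}

# The organization's threat list, broken down into detectable techniques.
threat_list = {
    "password guessing",
    "credential dumping",
    "c2 beaconing",
    "phishing payload delivery",
}

# Flatten the matrix into the set of techniques we can currently detect.
covered = set().union(*alerts_by_source.values())

# Anything on the threat list with no detecting source is a logging gap.
gaps = threat_list - covered
print(sorted(gaps))  # -> ['phishing payload delivery']
```

In practice the matrix lives in a spreadsheet or GRC tool, but the logic is the same: every threat on the list either maps to a log source that alerts on it, or it is a gap to close.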
More important, when an alert is generated, they respond. When I'm told a team monitors a particular threat (such as password guessing), I try to trigger that alert at a later date to see whether it fires and whether anyone responds. Most of the time, no one does. Secure companies have people jumping out of their seats when they get an alert, asking others what is going on.
Practice accountability and ownership from the get-go
Every object and application should have an owner (or group of owners) who controls its use and is accountable for its existence.
Most objects at your typical company have no owner, and IT can't point to the person who originally asked for the resource, let alone say whether it is still needed. In fact, at most companies, the number of groups that have been created exceeds the number of active user accounts. In other words, IT could assign each individual his or her own personal, custom group and the company would have fewer groups to manage than it has today.
But then, no one knows whether any given group can be removed. They live in fear of deleting any group. After all, what if that group is needed for a critical action and deleting it inadvertently brings down a mission-critical feature?
Another common example: after a successful breach, a company needs to reset every password in the environment. But you can't do this willy-nilly, because some accounts are service accounts attached to applications, and those require the password to be changed both inside the application and on the service account itself, if it can be changed at all.
But no one knows whether any given application is in use, whether it requires a service account, or whether the password can be changed, because ownership and accountability weren't established at the outset and there is no one to ask. In the end, the application is left alone, because you're far more likely to get fired for causing a critical operational interruption than for letting a hacker stick around.