
Fatal distraction: 7 IT mistakes that will get you fired

Dan Tynan | Sept. 10, 2013
True tales of IT pros who screwed up big and got fired quick.

"Sometimes rightfully so," he adds. "Often high-level admins need to get to sites that would normally be blocked in order to do their jobs. But that doesn't mean they shouldn't at least be monitored. Even good people end up doing things they normally wouldn't when they think no one's watching. If the admin knows he's being watched, that would eliminate a significant portion of this behavior."

Moral of the story: Some things are better done at home.

Fatal IT mistake No. 5: Keeping the wrong secrets
Fatal IT mistake No. 5: Keeping the wrong secrets
Until recently, Dana B. was a network engineer at a major U.S. Internet provider. One day, a former colleague was told to change the IP addresses on some production routers. Because these changes could briefly knock subscribers offline, the ISP typically made them overnight.

But this engineer didn't like to stay late, so he changed the addresses at the end of the day before he went home, then turned off his phone so that nobody would disturb him during his off-hours.

That was his first mistake. His bigger mistake was that he consistently refused to document anything he'd done, says Dana. That meant he had no idea which IP addresses he had already used in the past — and neither did anyone else.

After he left, the interfaces failed to come up because the addresses he assigned were already in use elsewhere on the network, leaving nearly 5,000 subscribers without Internet access. When other engineers tried to call him to figure out what went wrong, they couldn't reach him.

"It took a team of five network engineers several hours to find the issue and correct the problems," says Dana. "The next day he came in and was promptly walked out."

Moral of the story: Some secrets are better left unkept.

Fatal IT mistake No. 6: Unmitigated disaster
They thought they were ready for anything. An organization in a heavily regulated industry had spent millions building out a comprehensive disaster-recovery plan, including a dedicated fail-over data center humming with hundreds of virtual hosts and a Gigabit Ethernet connection.

But when an unplanned network outage cut the connection to its primary data center, the money the organization spent on its DR solution was for naught.

"The CTO did not have the confidence to activate the disaster-recovery plan, because they had never tested it," says Michael de la Torre, vice president of recovery services product management for SunGard Availability Services, which was called in by the organization later to shore up its DR strategy. "Instead, he stood by for more than a day hoping the circuit would be repaired. Everyone was offline that entire time. Employees had no access to email or data files, and the organization took a pretty big hit to its reputation."

 
