
An old theory of personality holds that people fall into two types–A and B.  Put simply, Type A personalities are highly aggressive and competitive, whereas Type B are not.  We all have seen this broad difference in personalities.  Some people we encounter seem ready to walk over their own grandmothers to get ahead.  Like all stereotypes, this is a gross oversimplification, but there’s a lot of truth in it.

In the corporate world, type A personalities tend to rise to the top.  Why?  Because their very personality is aggressive and competitive.  They like to push, push, push for what they want and are willing to drive their agenda at any cost.  They frequently are talkers but rarely listeners.  They also judge people through their own lens.  If you’re an introvert, quiet, or deliberative, if you’re a listener instead of a talker, they think you are not “driven” and probably not worth listening to or promoting.

The question is:  does being type A make you right?  Does it make your opinions more valuable?  I cannot think of any reason why being aggressive and competitive makes you more likely to be correct about anything.  In fact, I think the opposite is true.  If you don’t listen well and are always pushing your own agenda, you’re less likely to consider the opinions of others, which means your decision-making is less well-rounded.

I’ve pointed out before that quiet, deliberative people, the type B’s, are often the ones you really want to listen to.  However, in the corporate world, they are often kicked to the curb.  “He never says anything in meetings,” the type A’s say.  Well, if you hired him, maybe he’s actually a smart person who has a hard time contributing in a meeting with 20 people all talking fast.  Maybe he needs time to digest what he heard before providing recommendations.

Type A’s tend to be in positions of power not necessarily because they are smarter but because they fight for position in hierarchies.  This is not to say they are without value.  Their decisiveness and drive are very important to a healthy organization.  They can break indecision and move companies forward in ways type B’s cannot.

The key for both personality types is the old Greek maxim:  know thyself.  If you’re type A, you need to be careful not to be too aggressive.  Listen to your quieter colleagues, accept that they may have a different personality, and meet them where they are.  Call on them in meetings, and give them time to deliberate and come back to you.

For type B’s, you need to learn to speak out more.  You’re probably more respected than you realize, and when you do speak, you’re probably listened to.  Try to find forums that are more comfortable for you, like expressing your opinion in writing or 1:1’s.

Sadly, because of the cutthroat nature of the business world, I see little self-awareness and frequent domination of businesses by type A’s.  In the end, they may get people to follow them, but if they’re leading you off a cliff, their drive may not be such a good thing.

When I worked at the San Francisco Chronicle, I started a project to bring Internet connectivity to a number of sites that had only limited mainframe circuits.  To do this I decided to get DSL lines and run IPSec over them, a relatively new way of doing things for the time.  It was a lot cheaper than the Frame Relay we used at larger sites.

After setting up connectivity at one of our sites, the local office manager called me.  Web pages, he said, were only loading partially.  Some of the text and none of the images would show up.

Everyone blamed the network for everything, so I punted him to desktop support.  I could ping across the tunnel, I could send traffic just fine, the latency was minimal, and nothing was obviously wrong.  The network is usually up or down, but web pages don’t partially load when everything else is working.  Degraded service might cause the pages to load slowly, but not partially.

The desktop guys told me it was my problem.  We had a constant battle, as nine times out of ten they blamed the network, and nine times out of ten it was not the network.  The office manager was getting angry, so I decided I would do some investigation on site and prove to the desktop guys that they were wrong.

I went to the office and fired up my laptop.  Pages were partially loading for me too.  Hmmm.  I did what every network engineer does and fired up a packet sniffer.

I could see the TCP handshake succeeding, the browser requests, and the data exchange.  It looked normal, but why wasn’t the browser displaying the images?  I tried another browser and saw the same thing.

As I examined the sniffs, something hit me.  All the packets were being sent with the Don’t Fragment (DF) bit set in the IP header.  Could it be that the IPSec/GRE headers were making the packets large enough to require fragmentation?  And why was Windows setting the DF bit anyway?

As I wasn’t a desktop guy, I left the latter question alone.  I jumped on the router and built a routing policy which cleared the DF bit on incoming packets.  The pages started loading fine.  I left the policy in place and hoped that there would not be any unanticipated consequences.  I never saw any.
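
For the curious:  GRE encapsulation adds 24 bytes of header, and IPsec ESP adds roughly another 50 to 60 depending on the cipher, so a full-size 1500-byte packet no longer fits and has to be fragmented.  With DF set, the router drops it instead.  (Path MTU Discovery is supposed to handle this, but the ICMP messages it relies on are frequently filtered.)  The policy I built looked roughly like the sketch below.  This is from memory, not the original config; the interface name is a placeholder and the exact syntax varies by IOS version.

    ! Route-map that clears the DF bit so oversized packets can be
    ! fragmented before they enter the GRE/IPsec tunnel
    route-map CLEAR-DF permit 10
     set ip df 0
    !
    ! Apply it as a routing policy on the LAN-facing interface
    interface Ethernet0/0
     ip policy route-map CLEAR-DF

These days the more common fix on this kind of tunnel is to lower the tunnel MTU and clamp the TCP MSS with ip tcp adjust-mss, which avoids the fragmentation entirely, but clearing DF got the job done.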

Sometimes, it is, indeed, the network.

I’ve been thinking about the corporate world, how it operates, and the effects of corporatism on our lives.  If you’re a network engineer and think this is boring, pay attention.  Corporate culture, the influence of Wall Street, and the rise of a non-skilled management class have direct impact on your work and personal life.  The products you use are heavily influenced by corporate culture.  Why vendors release certain products, when, and how, are all controlled by corporate culture.  When a company tries to sell you something that doesn’t work and doesn’t serve your needs, when the company discontinues support for a product you bought after crashing and burning with it, when companies force products down your throat with buzzword messaging that means nothing to you, corporate culture explains it.

If you work in a corporation, the culture creates politics which affect what projects you work on, your career trajectory, and how you interact with your team.  In your personal life, the food you eat and drugs you take are very much explained by corporate culture.

I wrote in a previous post about the lack of anything permanent in the corporate world.  Everything seems to be temporary, everything is always in flux.  Companies are afflicted by short-term thinking, and short-term thinking is killing everyone.

One way this manifests itself is quarter-by-quarter thinking.  We all know sales people are judged on a quarterly basis, but corporations in general are as well.  Publicly traded companies have to present results to analysts, and thus to investors, every single quarter.  The results are compared against the last quarter, against the same quarter the previous year, and against other companies in the industry.  The results have a huge impact on stock price, executive compensation, and even executives’ jobs.

The effect of this trickles down to all levels of a public company.  Business units are judged by the quarterly performance of their products.  This means product managers are judged by the quarter, much like sales people.  Product managers are not commissioned directly like sales people, but they live and die by quarterly numbers.  As a result, they want to do everything possible to ensure quarterly numbers shine.

Now, imagine you are a product manager.  You have a deal worth $20 million on the line if you deliver specific features the customer wants.  You are going to do anything possible to win the deal, so your quarterly numbers look good.  Now it probably is the case that the $20 million customer’s feature requests are specific to their environment.  That is, adding the features will help that one customer, but probably very few others.  So, instead of trying to build a product that caters to a broad range of customers who might bring smaller deals, you end up building a product that caters to a narrow set of customers who make you look good in your quarterly business reviews.

Now this type of short-term thinking might be an obvious problem if you planned to spend twenty years at your company.  But instead you spend two years at a company, so you only have to pull this off for eight quarters.  You can put big happy numbers in your LinkedIn profile (“successfully drove record quarter of $100 million in sales!”) and then exit stage right to repeat the process elsewhere.  And the folks left behind have to clean up the mess.  Keep in mind your success within the company is also being judged by non-technical MBAs who are looking to do the same thing you are.

The companies that do the best long-term are those that eschew short-term thinking.  Apple is a great example of this.  They’ve had some disasters, but have generally taken risks to build products with long-term appeal.  I often mention Zappos founder Tony Hsieh, who, while he had serious personal problems, forsook short-term gain for long-term performance.  Even within a company, quarterly thinking can vary by business unit and leader.

At the end of the day, however, it’s Wall Street that encourages this.  As with any metric, execs end up chasing their stock price like a dog chasing its tail.  It doesn’t get you anywhere, however much progress you may think you are making.  Meanwhile, you may get rich, but you leave disaster in your wake.

In my years in the corporate world, I’ve attended many corporate self-help type sessions on how to increase leadership, creativity, and innovation.  There are many young consultants who are starting their careers off by helping us develop new skills in these areas, so I thought I would provide some helpful tips to get started.  Enjoy!

  1. If you are going to do consulting or presentations on innovation and leadership, it’s very important that you have never led anyone or invented anything.  Rather, you simply need to interview people who have done those things.  A lot of them.  Two or three thousand.  Actually, even if it’s only been 10 or 11, just say you’ve interviewed two or three thousand innovators or leaders.  This is called “research.”
  2. It’s especially important, if you are teaching career technology people how to innovate, that you loathe technology and cannot even upgrade your iPhone without help.  Remember, they may understand technology, but you understand how to innovate!  Two different things.
  3. You’re going to be making claims that are either wrong, or so obvious they don’t bear repeating.  Remember that you need to do several things to make those statements credible:
    • Begin by citing unverifiable claims from evolutionary biology as the basis for your statements.  Be sure to mention that we used to live out on open plains where we were at risk of being eaten.  Also be sure to mention “fight or flight.”  Bad:  “Strong leaders need to cultivate loyalty.”  Good:  “Evolutionary biology has shown us that, back when we lived on the plain and were vulnerable to getting eaten by lions, our brains developed a need to be loyal to a leader.”
    • Next, cite the latest neuroscience to substantiate your claims.  In fact, it doesn’t have to be real neuroscience.  Remember, nobody will ever check!  Just say “the latest research on the brain has shown…” and leave it at that.  Bad:  “To be innovative we need time to think.”  Good:  “The latest neuroscience has shown that our brains can’t innovate when they are overwhelmed and don’t have time to reason properly.”
    • Remember, if you’re going to be hired by corporations and paid thousands of dollars in speaking fees, you need to state obvious truths in a technical way that makes you seem smart.  Invent new terminology so when you regurgitate to people what they already know, you sound authoritative.  For example, instead of saying, “criticism hurts people’s feelings and can cause them to leave,” invent a “criticism-despair cycle.”  Make a diagram with arrows showing “criticism->rejection->despair->attrition”.  See how much more impressive you sound already?
  4. It really helps if you are a “Doctor”.  There are many unaccredited diploma mills that will send you a Ph.D. based on your “life experience.”  Or better yet, just start calling yourself “doctor”.  Do you really think anyone will call and verify your doctorate?

Remember, the most lucrative careers don’t involve building skills through years of hard work, but telling people who know better than you how to do their jobs.  I hope you have a rewarding career as a consultant!

I’ve mentioned in the past how my first job in IT (starting in 1995) was as a “systems administrator” for a small company in Marin County, California.  The company designed and built museum exhibits, and its team of around 60 employees was split between fabricators, who built the exhibits, and office workers.  Some of the office workers did administrative work, while others were designers.  So, I was managing a network of around 30 computers, all Macs.

When I got to the company, the computers were networked using LocalTalk, a LAN technology from Apple, and specifically the PhoneNet variation.  PhoneNet was a product from Farallon which enabled you to send the LocalTalk signal down a single pair of ordinary telephone wire.  The common practice was to use a spare pair in the same cable that carried the user’s phone line.  In my first Netstalgia piece, I mentioned that my PhoneNet network was entirely passive, and that I ran into a lot of challenges as a result.

PhoneNet was also slow, and our designers had to transfer large files.  I decided to set up a separate Ethernet network for them.  All I knew about Ethernet was that it was faster, and that the higher-end PowerPC Macs used by the designers supported it.  These computers had an AAUI port, a variation of the AUI port commonly in use for Ethernet connectivity at the time.  An AUI port required a transceiver to connect it to the Ethernet network.  Why?  Because Ethernet came in several flavors:  Thicknet and Thinnet coax, 10Base-T twisted pair, and fiber optic.  The universal AUI port (and Apple’s AAUI equivalent) gave you a choice of medium.

I didn’t really know how to make this work, and Google was not available at the time.  I had heard that you needed a “hub”, but I wasn’t sure exactly why or what the hub did.  The MacWarehouse catalogs I used to receive at the time advertised a product called an Etherwave, from the same company that made the PhoneNet transceiver.  The Etherwave allowed daisy-chaining of a twisted-pair Ethernet network.  I don’t know why, but this seemed easier and cheaper to me than buying a hub.  It was neither.

Farallon Etherwave Adapter

I bought a bunch of Etherwave adapters, got a ladder, and spent a night running Cat 3 cable in the suspended ceiling and crimping RJ45 cubes.  Finally, I daisy-chained everything together and switched the computers to the new Ethernet network.  It worked very well–file transfers were screaming!

The designers loved it, but there was a flaw.  The Ethernet network was not connected at all to the LocalTalk network.  The LocalTalk network was where email, printing, and many other services resided.  Their computers had connections to both, but they had to go to a control panel and switch between one or the other.  That meant, if they wanted to do a file transfer, the two designers would have to shout to each other to switch networks, at which point they could do it peer-to-peer.

There was another problem.  Apple’s networking software, called Open Transport, was notoriously buggy.  Switching between Ethernet and LocalTalk resulted in frequent crashes and reboots.  The initial thrill was wearing off.

I searched through catalogs and primitive websites looking for a solution.  I learned that I could buy a device called a router to connect the Ethernet and LocalTalk into a single unified network.  I desperately looked for the cheapest one.  My go-to vendor, Farallon, made a router but it was way too expensive.  Finally, I found a cheap router called a PathFinder manufactured by Dayna systems.

I went to my boss, the VP of operations.  I showed her the price (maybe $800?) and she balked.  This company ran a tight ship, and she said we couldn’t afford it.

I went back to our head designer and asked her to keep a post-it on her computer for a day, with a tally mark each time she had to reboot due to the Ethernet-to-LocalTalk switching.  Then we timed how long it took her to reboot.  I went back to my boss and showed her how much time was being lost each day by a single designer.  Her left hand flew over the buttons on the calculator on her desk, then she looked up at me.  “Buy the router,” she said.

The PathFinder did indeed fix the problem.  And so my first Ethernet network, as well as my first experience configuring a router, came at a company in 1995 with 30 Macs, and I’ve spent decades working with both technologies since those days.

Some years ago, I worked for a company with a CEO who had a background in marketing.  It was 2010, and he decided to use his marketing skills to launch a huge new campaign called “Mission 10”.  Our goal:  to become the next $10 billion company.  At the time, I think revenue was less than $5 billion.  Slick slides were drawn up, pep rally company meetings were held, and everyone in the company began pivoting their work to fit the new agenda.  Suddenly every initiative had to have a “Mission 10” theme.  Anyone who has worked in the corporate world has been there more than once.

The problem?  Despite the rah-rah of our CEO, we never got even close to $10 billion in revenue.  In fact, that company was still below $5 billion last I checked.  The bigger problem?  The CEO moved on three years later, having never really achieved this or any other goal he set.  He later ended up CEO of an even larger and more famous company that has nothing to do with technology.  This is known as “failing upward”.

In light of the “great resignation” I’d like to write a little about permanence, or the lack thereof.  We live in a temporary world.  People pick up a job and stay for two or three years, and then move on.  This was true even before COVID.  I myself have several two-year stints on my resume.  The longest I’ve worked anywhere is six.

Three years is just long enough to kick off some major initiative and get out at the peak, just before the whole thing crashes.  The damage done by corporate executives pursuing this short-term strategy is massive.  It works like this.  An exciting new executive is hired on from a big company.  The new executive launches a new product, architecture, marketing campaign, acquisition, whatever.  Everyone rallies around it because, well, he’s the boss, and because if you want funding for anything it needs to be tied to the boss’ initiative.  The new initiative (let’s say it’s a product) is pumped up with cash, the marketing engine kicks in, the company oversells the product, and then customers start snapping it up.  It doesn’t perform as expected.  Things start to crash.  Money dries up.  The executive exits.  And whoever decides to stay is left picking up the pieces of the mess that this guy created.

In Ancient Greece, you faced consequences for this sort of thing, usually exile, sometimes death.  While I’m not advocating the death penalty for corporate screw-ups (although in some industries they do cost lives), what’s fascinating is that in the corporate world, the consequences are the opposite.  Said executive who just screwed up royally walks away with huge bonuses and lots of stock, takes a nice sabbatical, and begins the cycle again somewhere else.

If you think executives are the only problem, think again.  It happens at every level of the corporate world.  When a junior IT guy messes up a new system and then bolts for another job, you have the same issue at a smaller scale.  He just doesn’t get the bonus and sabbatical.  As a leader of technical marketing engineers, I see all sorts of challenges when an experienced TME leaves and takes knowledge with him.  Features can be stalled when the people who were working on them leave.

In my grandfather’s era, and even my father’s, it was expected that you would start and end your career at the same company.  There was an expectation of permanence.  People were proud of their companies and how they were treated, and bragged about the excellent pension they’d receive when leaving.  Now, we spend three years and jump ship to boost our salary.

Companies are, of course, largely responsible.  Often they don’t create the sort of employment experience that anyone would want to tolerate for long.  People stopped being human beings and started becoming human “resources”.  Executives, under various pressures, began to see their workforce as mere “metrics” to be manipulated, as they learned in their B-school classes.  Times are good?  Dial up the workforce.  Times are bad?  Lay off 3%.  People are just numbers on a slide to many execs, and the difficulties of terminating employment are a remote problem to be dealt with by line managers.  As a result, employment is not a long-term commitment but a short-term business transaction on both ends.

The temporary workforce has an interesting effect on longer-term employees as well.  Someone who has worked at the same company for 15 or 20 years sees executives and initiatives come and go, ebbing and flowing like the tide on a beach.  They often develop an apathy and callousness that makes their own work unproductive.  They tend to focus on the day-to-day instead of the long-term, and often dismissively ignore the plans of new leadership, figuring the leaders will just be replaced and the cycle will start over.  Thus, while they have a long-term career, they often have a short-term level of focus.

We all live in a temporary world now, and permanence is in short supply.  If you want to understand why companies build bad products, why executives start disastrous programs and leave, and why there never seem to be consequences, this is a huge part of it.  I don’t really have a solution, I’m afraid.  Some of the causes are:  greedy hedge-fund finance people who take over corporate boards, an undisciplined corporate press/media, the instant availability of information leading to a lack of deliberation, and the rise of a management class who are not actually experts in anything other than management itself.  We can all point fingers at ourselves for up and leaving when the going gets tough.

The Greek philosopher Heraclitus famously said that you cannot step in the same river twice.  He meant that, if you cross a river, each time you take a step the water you were originally standing in has passed on, and you’re in new water.  Thus, there’s really no river.  Sometimes the tech world, and the corporate world in general, feel like Heraclitus’ River.  Even if you stand in one place, everything just moves on.

I have to give AWS credit for posting a fairly detailed technical description of the cause of their recent outage.  Many companies rely on crisis PR people to phrase vague and uninformative announcements that do little to inform customers and put their minds at ease.  I must admit, having read the AWS post-mortem a couple times, I don’t fully understand what happened, but it seems my previous article on automation running wild was not far off.  Of course, the point of the article was not to criticize automation.  An operation the size of AWS would be simply impossible without it.  The point was to illustrate the unintended consequences of automation systems.  As a pilot and aviation buff, I can think of several examples of airplanes crashing due to out-of-control automation as well.

AWS tells us that “an automated activity to scale capacity of one of the AWS services hosted in the main AWS network triggered an unexpected behavior from a large number of clients inside the internal network.”  What’s interesting here is that the automation event was not itself a provisioning of network devices.  Rather, the capacity increase caused “a large surge of connection activity that overwhelmed the networking devices between the internal network and the main AWS network…”  This is just the old problem of overwhelming link capacity.  I remember one time at Juniper when a lab device started sending a flood of traffic to the Internet, crushing the Internet-facing firewalls.  It’s nice to know that an operation like Amazon faces the same challenges.  At the end of the day, bandwidth is finite, and enough traffic will ruin any network engineer’s day.

“This congestion immediately impacted the availability of real-time monitoring data for our internal operations teams, which impaired their ability to find the source of congestion and resolve it.”  This is the age-old problem, isn’t it?  Monitoring our networks requires network connectivity.  How else do we get logs, telemetry, traps, and other information from our devices?  And yet, when our network is down, we can’t get this data.  Most large-scale customers do maintain a separate out-of-band network just for monitoring.  I would assume Amazon does the same, but perhaps somehow this got crushed too?  Or perhaps what they refer to as their “internal network” was the OOB network?  I can’t tell from the post.

“Operators continued working on a set of remediation actions to reduce congestion on the internal network including identifying the top sources of traffic to isolate to dedicated network devices, disabling some heavy network traffic services, and bringing additional networking capacity online. This progressed slowly…”  I don’t want to take pleasure in others’ pain, but this makes me smile.  I’ve spent years telling networking engineers that no matter how good their tooling, they are still needed, and they need to keep their skills sharp.  Here is Amazon, with presumably the best automation and monitoring capabilities of any network operator, and they were trying to figure out top talkers and shut them down.  This reminds me of the first broadcast storm I faced, in the mid-1990’s.  I had to walk around the office unplugging things until I found the source.  Hopefully it wasn’t that bad for AWS!

Outages happen, and Amazon has maintained a high level of service with AWS since the beginning.  The resiliency of such a complex environment should be astounding to anyone who has built and managed complex systems.  Still, at the end of the day, no matter how much you automate (and you should), no matter how much you assure (and you should), sometimes you have to dust off the packet sniffer and figure out what’s actually going down the wire.  For network engineers, that should be a reminder that you’re still relevant in a software-defined world.

As I write this, a number of sites out on the Internet are down because of an outage at Amazon Web Services.  Delta Airlines is suffering a major outage.  On a personal note, my wife’s favorite radio app and my Lutron lighting system are not operating correctly.  Of course, this outage is a reminder of the simple principle of not putting one’s eggs in a single basket.  AWS became the dominant web provider early on, but there are multiple viable alternatives now.  Long before the modern cloud emerged, I regularly ran disaster recovery exercises to ensure business continuity when a data center or service provider failed.  Everyone who uses a cloud provider better have a backup, and you better figure out a way to periodically test that backup.  A few startups have emerged to make this easier.

While the cause of the outage is yet unknown, there was an interesting comment in a Newsweek article on the outage.  Doug Madory, director of internet analysis at Kentik Inc., said:  “More and more these outages end up being the product of automation and centralization of administration…”  I’ve been involved in automation in some form or another for my entire six years at Cisco, and one aspect of automation is not talked about enough:  automation gone wild.  Let me give a non-computer example.

Back when I worked at the San Francisco Chronicle, the production department installed a new machine in our Union City printing plant.  The Sunday paper, back then, had a large number of inserts with advertisements and circulars that needed to be stuffed into the paper.  They were doing this manually, if you can believe it.

The new machine had several components.  One part of the process involved grabbing the inserts and carrying them in a conveyor system high above the plant floor, before dropping them down into the inserter.  It’s hard to visualize, so I’ve included a picture of a similar machine.

You can see the inserts coming in via the conveyor, hanging vertically.  This conveyor extended quite far.  One day I was in the plant, working on some networking thing or other, and the insert machine was running.  I looked back and saw the conveyor glitch somehow, and then a giant ball of paper started to form in the corner of the room, before finally exploding and raining paper down on the floor of the plant.  There was a commotion and one of the workers had to shut the machine down.

The point is, automation is great until it doesn’t work.  When it fails, it fails big.  You don’t just get a single problem, but a compounding problem.  It wasn’t just a single insert that got hit by the glitch, but dozens of them, if not more.  When you use manual processes, failures are contained.

Let’s tie this back to networking.  Say you need to push a new configuration to hundreds of devices, perhaps adding a new routing protocol.  If you do it by hand on one device, and suddenly routes start dropping out of the routing table, chances are you won’t proceed with the other devices.  You’ll check your config to see what happened and why.  But if you set up, say, a Python script to run around and do this via NETCONF on 100 devices, suddenly you might have a massive outage on your hands.  The same could happen using a tool like Ansible, or even a vendor network management platform.

There are ways to combat this problem, of course.  Automated checks and validation after changes are an important one, but the problem with this approach is that you cannot predict every failure.  If you program 10 checks, it’s going to fail in way #11, and you’re out of luck.
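
To make that concrete, here is a minimal sketch of the safer pattern:  push the change one device at a time and stop at the first device that fails a post-change check.  It assumes the ncclient Python library, and the device list, credentials, config payload, and validation logic are all placeholders.

    # Minimal sketch of a guarded, device-by-device rollout over NETCONF.
    # Assumes the ncclient library; addresses, credentials, and payload are placeholders.
    from ncclient import manager

    DEVICES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # placeholder addresses
    NEW_CONFIG = "<config><!-- placeholder payload --></config>"

    def change_is_healthy(conn):
        # Placeholder post-change validation: in real life, pull operational
        # state with conn.get() and verify routes/neighbors look as expected.
        return True

    for device in DEVICES:
        with manager.connect(host=device, port=830, username="admin",
                             password="secret", hostkey_verify=False) as conn:
            conn.edit_config(target="running", config=NEW_CONFIG)
            if not change_is_healthy(conn):
                print(f"Validation failed on {device}; stopping rollout")
                break
    else:
        print("Change applied and validated on all devices")

The point isn’t that ten lines of Python makes you safe; as I said, the failure you hit is usually the one you didn’t think to check for.  The point is that going device by device, with a check in between, contains the blast radius in much the same way doing it by hand does.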

As I said, I’ve spent years promoting automation.  You simply couldn’t build a network like Amazon’s without it.  And it’s critical for network engineers to continue developing skills in this area.  We, as vendors and promoters of automation tools, need to be careful how we build and sell these tools to limit customer risk.

Eventually they got the inserter running again.  Whatever the cause of Amazon’s outage, let’s hope it’s not automation gone wild.

How often have you learned about a new technology, and couldn’t understand it?  How many trainings and presentations have you sat through that left you in a mental fog?  It amazes me how many technologies we are supposed to master in our industry, and how many we never do.

Let me give an example.  When I heard about “Cloud Computing” I could not, for the life of me, understand what it meant.  I went to meeting after meeting where we talked about “the Cloud” without any understanding of what it actually was.  I knew I used clouds in a lot of Visio diagrams, but the MBA-types who were telling me we needed to migrate to the cloud would never be able to understand the Visio diagrams that network engineers make.  It seemed to involve using centralized computing resources, but I’d been doing this for years.  My first ISP accounts were shell accounts.  My email and other services were hosted on their computers.  Nothing was new about this.  In fact, Larry Ellison gave a hilarious talk in which he asked “What the hell is Cloud Computing?”

We all know the “cloud” has in fact made significant changes in how we engineer computing resources, but the truth is, the idea of centralized “compute” is not a new one.  (Side note:  I hate turning verbs into nouns.  “Compute”, “spend”, and “ask” are verbs, not nouns.  The MBAs who invent these terms apparently don’t have to study grammar.)  The scale is certainly different, but we all know that mainframes had both centralized computing and virtualization long before anyone said “cloud.”

SDN is another one.  I was told we needed SDN, but I couldn’t figure out what it meant.  I was a hard-core routing protocols guy.  BGP and OSPF are software.  Ergo, networks are already software defined.

Someone sent me a video from Nicira, later acquired by VMware.  The vague video described slicing networks into pools, or something like that.  I couldn’t understand what this meant.  Like a VLAN?  I finally found a document that described SDN as separation of the control plane from the data plane.  OK, but we’d already been doing that in routers and switches for years.  Yes, but SDN was a centralized control plane.  Kind of like BGP route reflectors?  I couldn’t figure it out.  I spent some time getting OpenFlow up and running to try to understand it from the ground up.  What a waste that was.  Whatever SDN has become, it’s certainly not what it was originally defined to be.  And don’t get me started on SASE.

I used to think maybe I was stupid, but now I realize all of these things confused me because they were (a) confusing in themselves, or (b) so badly explained that nobody really understood them.  A little more detail:

  • Some technologies are simply vague marketing terms.  They don’t correspond to anything precise in reality.
  • Some technologies do correspond to reality, but they are simply bucket terms.  That is, the marketers took five, six, ten technologies, and slapped a new label on them.  In this case, you’re looking for some precise definition of term X and you realize term X refers to ten different things at once.
    • Sometimes new technologies are invented, and the inventors don’t want to cough up too much proprietary information.  So they produce vaguely worded marketing content that appeals to “analysts” with MBAs in marketing, but which technical people realize is meaningless.  Said “analysts” now run around creating hype (“You need software-defined cloud secure-access zero trust!”) and now we’re told we need to implement it.
  • A lot of technical people are really bad explainers.  Sometimes there is a new technology which is clear and well-defined, but the people sent to explain it are completely incapable of explaining anything at all.

My point is, it’s ok to be confused.  A lot of times we’re in the room and everyone seems to be getting it, but we have no idea what is going on.  Chances are, nobody else really understands what is being said either.  Ask questions, drill down, and if you don’t understand something, chances are it’s hot air.  In a world where we prioritize talk over reality, there seems to be an abundance of that.

As a part of my job at Cisco I’ve been looking into Zscaler and their offerings.  It started me thinking back to the early days of remote access, and I figured it would make a good topic for Netstalgia.

I wrote in the past about how bulletin board systems (BBSs) work, and in another article I resurrected my old BBS in an Apple II emulator.  In a nutshell, a computer with a BBS set up had a modem on it and users dialed in using their own modem over dial-up phone lines.  I’m not sure how many readers are young and don’t remember modems, and how many are dinosaurs like me, but as a reminder, modems connect computers to phone lines.  One modem is set to answer any call that comes in, and waits.  Then another user with a modem inputs the phone number of the other end into his software.  His modem dials out, the phone rings, and the other modem answers with a carrier tone.  Then the dialer responds and after some negotiation on the line, a connection is established and data is sent.

Now in my first job, at a small company in Marin County, California, in the mid-1990’s, we had one computer set up as a dedicated remote access server.  It had a single modem with a single phone line, and ran Apple Remote Access server, since we were a Mac shop.  We only had one user with a laptop, the CEO, so when he traveled he would dial in and be able to access basic functions like email and our file server.  There was no Internet access back then.

When I moved on to a consulting company, I did a few more industrial setups.  Usually these involved remote access servers made up of a bunch of modems and a LAN port.  The remote access server would accept a bunch of phone lines and then provide TCP/IP or AppleTalk connectivity to the network.  By this time users had Internet connectivity.  The Shiva LanRover is one example of this sort of device.

Shiva LANRover

When I worked at the San Francisco Chronicle, we had an Ascend Max which served this purpose.  The Max had two DS3 lines plugged into it.  It was the first time I had seen a DS3, and I remember being excited to learn the phone company could deliver a circuit over coax.  (It actually entered the building on fiber and went over coax from the MPOE.)  The DS3s delivered ISDN PRI service:  dozens of dial-in phone lines multiplexed over digital circuits.  It took me months to find someone who had the password to the Max, and when I finally got in I found out that the second DS3 was unconfigured.  Users had been complaining about busy signals and all I had to do was change a few menu settings.

Ascend Max

Remote access dial-up was heavily used at the Chronicle.  Reporters filed their stories via modem.  VPN was just coming out, and I decided to replace the dial-up with VPN + dial-up.  A company called Fiberlink provided a dialer with a vast database of local Internet dial-up lines from a variety of carriers they contracted with.  Our users would pick a local phone line and then dial into it.  They then launched our Nortel VPN client to establish connectivity.  This saved us a fortune on 800-number charges, but our users hated it.  As a good senior guy, I did the initial design and left implementation to a junior guy.  I’m amazed he still talks to me.  (And he’s not junior anymore!)

Despite being a long-time Cisco guy, I never touched the Cisco remote access stuff.  I did use 2500-series routers with serial ports as terminal servers in the lab, but I never connected modems to them.  Still, when I passed my CCNP, one exam covered remote access and I needed to know a lot about modems.

Nowadays I rarely log into VPN.  Most systems I need to access can authenticate through our Zero Trust/SSO system without the need for a connection to Cisco’s network.  We’ve come a long way since the days of dial-up.  And while I said I missed wiring in another post, I sure don’t miss modem tones!