Sysadmin / Network Ops
185 stories
·
8 followers

Adult

4 Comments and 10 Shares
(1) That shopping cart is full of AirHeads, and (2) I died at 41 from what the AirHeads company spokesperson called 'probably natural causes.'
supine
3127 days ago
An Aussie living in Frankfurt
chrisamico
3127 days ago
Boston, MA
4 public comments
darastar
3116 days ago
Yep.
bibliogrrl
3126 days ago
This is me every day of my life.
Chicago!
johnparkinson
3127 days ago
Adulting in a nutshell.
London
alt_text_bot
3127 days ago
(1) That shopping cart is full of AirHeads, and (2) I died at 41 from what the AirHeads company spokesperson called 'probably natural causes.'

On the 30th anniversary of the Space Shuttle Challenger disaster

1 Comment and 5 Shares

Today is the 30th anniversary of the final launch and subsequent catastrophic loss of the Space Shuttle Challenger. Popular Mechanics has an oral history of the launch and aftermath.

Capano: We got the kids quiet, and then I remember that the line that came across the TV was "The vehicle has exploded." One of the girls in my classroom said, "Ms. Olson [Capano's maiden name], what do they mean by 'the vehicle'?" And I looked at her and I said, "I think they mean the shuttle." And she got very upset with me. She said, "No! No! No! They don't mean the shuttle! They don't mean the shuttle!"

Raymond: The principal came over the PA system and said something like, "We respectfully request that the media leave the building now. Now." Some of the press left, but some of them took off into the school. They started running into the halls to get pictures, to get sound -- people were crying, people were running. It was chaos. Some students started chasing after journalists to physically get them out of the school.

I have certainly read about Feynman's O-ring demonstration during the investigation of the disaster, but I hadn't heard this bit:

Kutyna: On STS-51C, which flew a year before, it was 53 degrees [at launch, then the coldest temperature recorded during a shuttle launch] and they completely burned through the first O-ring and charred the second one. One day [early in the investigation] Sally Ride and I were walking together. She was on my right side and was looking straight ahead. She opened up her notebook and with her left hand, still looking straight ahead, gave me a piece of paper. Didn't say a single word. I look at the piece of paper. It's a NASA document. It's got two columns on it. The first column is temperature, the second column is resiliency of O-rings as a function of temperature. It shows that they get stiff when it gets cold. Sally and I were really good buddies. She figured she could trust me to give me that piece of paper and not implicate her or the people at NASA who gave it to her, because they could all get fired.

I wondered how I could introduce this information Sally had given me. So I had Feynman at my house for dinner. I have a 1973 Opel GT, a really cute car. We went out to the garage, and I'm bragging about the car, but he could care less about cars. I had taken the carburetor out. And Feynman said, "What's this?" And I said, "Oh, just a carburetor. I'm cleaning it." Then I said, "Professor, these carburetors have O-rings in them. And when it gets cold, they leak. Do you suppose that has anything to do with our situation?" He did not say a word. We finished the night, and the next Tuesday, at the first public meeting, is when he did his O-ring demonstration.

We were sitting in three rows, and there was a section of the shuttle joint, about an inch across, that showed the tang and clevis [the two parts of the joint meant to be sealed by the O-ring]. We passed this section around from person to person. It hit our row and I gave it to Feynman, expecting him to pass it on. But he put it down. He pulled out pliers and a screwdriver and pulled out the section of O-ring from this joint. He put a C-clamp on it and put it in his glass of ice water. So now I know what he's going to do. It sat there for a while, and now the discussion had moved on from technical stuff into financial things. I saw Feynman's arm going out to press the button on his microphone. I grabbed his arm and said, "Not now." Pretty soon his arm started going out again, and I said, "Not now!" We got to a point where it was starting to get technical again, and I said, "Now." He pushed the button and started the demonstration. He took the C-clamp off and showed the thing does not bounce back when it's cold. And he said the now-famous words, "I believe that has some significance for our problem." That night it was all over television and the next morning in the Washington Post and New York Times. The experiment was fantastic -- the American public had short attention spans and they didn't understand technology, but they could understand a simple thing like rubber getting hard.

I never talked with Sally about it later. We both knew what had happened and why it had happened, but we never discussed it. I kept it a secret that she had given me that piece of paper until she died [in 2012].

Whoa, dang. Also not well known is that the astronauts survived the initial explosion and were possibly alive and conscious when they hit the water two and a half minutes later.

Over the December holiday, I read 10:04 by Ben Lerner (quickly, recommended). The novel includes a section on the Challenger disaster and how very few people saw it live:

The thing is, almost nobody saw it live: 1986 was early in the history of cable news, and although CNN carried the launch live, not that many of us just happened to be watching CNN in the middle of a workday, a school day. All other major broadcast stations had cut away before the disaster. They all came back quickly with taped replays, of course. Because of the Teacher in Space Project, NASA had arranged a satellite broadcast of the mission into television sets in many schools -- and that's how I remember seeing it, as does my older brother. I remember tears in Mrs. Greiner's eyes and the students' initial incomprehension, some awkward laughter. But neither of us did see it: Randolph Elementary School in Topeka wasn't part of that broadcast. So unless you were watching CNN or were in one of the special classrooms, you didn't witness it in the present tense.

Oh, the malleability of memory. I remember seeing it live too, at school. My 7th grade English teacher permanently had a TV in her room and because of the schoolteacher angle of the mission, she had arranged for us to watch the launch, right at the end of class. I remember going to my next class and, as I was the first student to arrive, telling the teacher about the accident. She looked at me in disbelief and then with horror as she realized I was not the sort of kid who made terrible stuff like that up. I don't remember the rest of the day and now I'm doubting if it happened that way at all. Only our classroom and a couple others watched it live -- there wasn't a specially arranged whole-school event -- and I doubt my small school had a satellite dish to receive the special broadcast anyway. Nor would we have had cable to get CNN...I'm not even sure cable TV was available in our rural WI town at that point. So...?

But, I do remember the jokes. The really super offensive jokes. The jokes actually happened. Again, from 10:04:

I want to mention another way information circulated through the country in 1986 around the Challenger disaster, and I think those of you who are more or less my age will know what I'm talking about: jokes. My brother, who is three and a half years older than I, would tell me one after another as we walked to and from Randolph Elementary that winter: Did you know that Christa McAuliffe was blue-eyed? One blew left and one blew right; What were Christa McAuliffe's last words to her husband? You feed the kids -- I'll feed the fish; What does NASA stand for? Need Another Seven Astronauts; How do they know what shampoo Christa McAuliffe used? They found her head and shoulders. And so on: the jokes seemed to come out of nowhere, or to come from everywhere at once; like cicadas emerging from underground, they were ubiquitous for a couple of months, then disappeared. Folklorists who study what they call 'joke cycles' track how -- particularly in times of collective anxiety -- certain humorous templates get recycled, often among children.

At the time, I remember these jokes being hilarious [1] but also a little horrifying. Lerner continues:

The anonymous jokes we were told and retold were our way of dealing with the remainder of the trauma that the elegy cycle initiated by Reagan-Noonan-Magee-Hicks-Dunn-C.A.F.B. (and who knows who else) couldn't fully integrate into our lives.

Reminds me of how children in Nazi ghettos and concentration camps dealt with their situation by playing inappropriate games.

Even in the extermination camps, the children who were still healthy enough to move around played. In one camp they played a game called "tickling the corpse." At Auschwitz-Birkenau they dared one another to touch the electric fence. They played "gas chamber," a game in which they threw rocks into a pit and screamed the sounds of people dying.

  1. Also, does anyone remember the dead baby jokes? They were all the rage when I was a kid. There were books of them. "Q: What do you call a dead baby with no arms and no legs laying on a beach? A: Sandy." And we thought they were funny as hell.

Tags: 10:04   Ben Lerner   books   NASA   Richard Feynman   science   space   Space Shuttle   video
supine
3216 days ago
An Aussie living in Frankfurt
digdoug
3218 days ago
Louisville, KY
1 public comment
satadru
3217 days ago
I get chills reading the Sally Ride anecdote. Imagine working in an organization where the culture requires you to pass safety information by subterfuge.
New York, NY

The Twelve Days of Crisis – A Retrospective on Linode’s Holiday DDoS Attacks

1 Share


Over the twelve days between December 25th and January 5th, Linode saw more than a hundred denial-of-service attacks against every major part of our infrastructure, some severely disrupting service for hundreds of thousands of Linode customers.

I’d like to follow up on my earlier update by providing some more insight into how we were attacked and what we’re doing to stop it from happening again.

Linode Attack Points

Pictured above is an overview of the different infrastructure points that were attacked. Essentially, the attacker moved up our stack in roughly this order:

  • Layer 7 (“400 Bad Request”) attacks toward our public-facing websites
  • Volumetric attacks toward our websites, authoritative nameservers, and other public services
  • Volumetric attacks toward Linode network infrastructure
  • Volumetric attacks toward our colocation provider’s network infrastructure

Most of the attacks were simple volumetric attacks. A volumetric attack is the most common type of distributed denial-of-service (DDoS) attack in which a cannon of garbage traffic is directed toward an IP address, wiping the intended victim off the Internet. It’s the virtual equivalent to intentionally causing a traffic-jam using a fleet of rental cars, and the pervasiveness of these types of attacks has caused hundreds of billions of dollars in economic loss globally.

Typically, Linode sees several dozen volumetric attacks aimed toward our customers each day. However, these attacks almost never affect the wider Linode network because of a tool we use to protect ourselves called remote-triggered blackholing. When an IP address is “blackholed,” the Internet collectively agrees to drop all traffic destined to that IP address, preventing both good and bad traffic from reaching it. For content networks like Linode, which have hundreds of thousands of IPs, blackholing is a blunt but crucial weapon in our arsenal, giving us the ability to ‘cut off a finger to save the hand’ – that is, to sacrifice the customer who is being attacked in order to keep the others online.

Blackholing fails as an effective mitigator under one obvious but important circumstance: when the IP that’s being targeted – say, some critical piece of infrastructure – can’t go offline without taking others down with it. Examples that usually come to mind are “servers of servers,” like API endpoints or DNS servers, that make up the foundation of other infrastructure. While many of the attacks were against our “servers of servers,” the hardest ones for us to mitigate turned out to be the attacks pointed directly toward our own and our colocation providers’ network infrastructure.

Secondary Addresses

The attacks leveled against our network infrastructure were relatively straightforward, but mitigating them was not. As an artifact of history, we segment customers into individual /24 subnets, meaning that our routers must have a “secondary” IP address inside each of these subnets for customers to use as their network gateways. As time has gone by, our routers have amassed hundreds of these secondary addresses, each a potential target for attack.

Of course, this was not the first time that our routers have been attacked directly. Typically, special measures are taken to send blackhole advertisements to our upstreams without blackholing in our core, stopping the attack while allowing customer traffic to pass as usual. However, we were unprepared for the scenario where someone rapidly and unpredictably attacked many dozens of different secondary IPs on our routers. This was for a couple of reasons. First, mitigating attacks on network gear required manual intervention by network engineers which was slow and error-prone. Second, our upstream providers were only able to accept a limited number of blackhole advertisements in order to limit the potential for damage in case of error.

After several days of playing cat-and-mouse games with the attacker, we were able to work with our colocation providers to either blackhole all of our secondary addresses, or to instead drop the traffic at the edges of their transit providers’ networks where blackholing wasn’t possible.

Cross-Connects

The attacks targeting our colocation providers were just as straightforward, but even harder to mitigate. Once our routers were no longer able to be attacked directly, our colocation partners and their transit providers became the next logical target – specifically, their cross-connects. A cross-connect can generally be thought of as the physical link between any two routers on the Internet. Each side of this physical link needs an IP address so that the two routers can communicate with each other, and it was those IP addresses that were targeted.

As was the case with our own infrastructure, this method of attack was not novel in and of itself. What made this method so effective was the rapidity and unpredictability of the attacks. In many of our datacenters, dozens of different IPs within the upstream networks were attacked, requiring a level of focus and coordination between our colocation partners and their transit providers that was difficult to maintain. Our longest outage by far – over 30 hours in Atlanta – can be directly attributed to frequent breakdowns in communication between Linode staff and people who were sometimes four degrees removed from us.

We were eventually able to completely close this attack vector after some stubborn transit providers finally acknowledged that their infrastructure was under attack and successfully put measures in place to stop the attacks.

Lessons Learned

On a personal level, we’re embarrassed that something like this could have happened, and we’ve learned some hard lessons from the experience.

Lesson one: don’t depend on middlemen

In hindsight, we believe the longer outages could have been avoided if we had not been relying on our colocation partners for IP transit. There are two specific reasons for this:

First, in several instances we were led to believe that our colocation providers simply had more IP transit capacity than they actually did. Several times, the amount of attack traffic directed toward Linode was so large that our colocation providers had no choice but to temporarily de-peer with the Linode network until the attacks ended.

Second, successfully mitigating some of the more nuanced attacks required the direct involvement of senior network engineers from different Tier 1 providers. At 4am on a holiday weekend, our colocation partners became an extra, unnecessary barrier between ourselves and the people who could fix our problems.

Lesson two: absorb larger attacks

Linode’s capacity management strategy for IP transit has been simple: when our peak daily utilization starts approaching 50% of our overall capacity, then it’s time to get more links.

This strategy is standard for carrier networks, but we now understand that it is inadequate for content networks like ours. To put some real numbers on this, our smaller datacenter networks have a total IP transit capacity of 40Gbps. This may seem like a lot of capacity to many of you, but in the context of an 80Gbps DDoS that can’t be blackholed, having only 20Gbps worth of headroom leaves us with crippling packet loss for the duration of the attack.

Lesson three: let customers know what’s happening

It’s important that we acknowledge when we fail, and our lack of detailed communication during the early days of the attack was a big failure.

Providing detailed technical updates during a time of crisis can only be done by those with detailed knowledge of the current state of affairs. Usually, those people are also the ones who are firefighting. After things settled down and we reviewed our public communications, we came to the conclusion that our fear of wording something poorly and causing undue panic led us to speak more ambiguously than we should have in our status updates. This was wrong, and going forward, a designated technical point-person will be responsible for communicating in detail during major events like this. Additionally, our status page now allows customers to be alerted about service issues by email and SMS text messaging via the “Subscribe to Updates” link.

Our Future is Brighter Than our Past

With these lessons in mind, we’d like you to know how we are putting them into practice.

First, the easy part: we’ve mitigated the threat of attacks against our public-facing servers by implementing DDoS mitigation. Our nameservers are now protected by Cloudflare, and our websites are now protected by powerful commercial traffic scrubbing appliances. Additionally, we’ve made sure that the emergency mitigation techniques put in place during these holiday attacks have been made permanent.

By themselves, these measures put us in a place where we’re confident that the types of attacks that happened over the holidays can’t happen again. Still, we need to do more. So today I’m excited to announce that Linode will be overhauling our entire datacenter connectivity strategy, backhauling 200 gigabits of transit and peering capacity from major regional points of presence into each of our locations.

Upgraded Newark Diagram
Carriers shown are for example purposes only. All product names and logos are the property of their respective owners.

Here is an overview of forthcoming infrastructure improvements to our Newark datacenter, which will be the first to receive these capacity upgrades. The headliners of this architecture are the optical transport networks that we have already begun building out. These networks will provide fully diverse paths to some of the most important PoPs in the region, giving Linode access to hundreds of different carrier options and thousands of direct peering partners.

Compared to our existing architecture, the benefits of this upgrade are obvious. We will be taking control of our entire infrastructure, right up to the very edge of the Internet. This means that, rather than depending on middlemen for IP transit, we will be in direct partnership with the carriers who we depend on for service. Additionally, Linode will quintuple the amount of bandwidth available to us currently, allowing us to absorb extremely large DDoS attacks until properly mitigated. As attack sizes grow in the future, this architecture will quickly scale to meet their demands without any major new capital investment.

Final Words

Lastly, sincere apologies are in order. As a company that hosts critical infrastructure for our customers, we are trusted with the responsibility of keeping that infrastructure online. We hope the transparency and forward-thinking in this post can regain some of that trust.

We would also like to thank you for your kind words of understanding and support. Many of us had our holidays ruined by these relentless attacks, and it’s a difficult thing to try and explain to our loved ones. Support from the community has really helped.

We encourage you to post your questions or comments below.

supine
3217 days ago
An Aussie living in Frankfurt

Using sensu redaction

1 Share

Sensu has a lot of cool features, but some of them are rarely used because either the documentation isn’t massively clear, or people deem it a “bit hard”. One of these cool features is redaction of passwords. You may have seen many a sensu check in the uchiwa dashboard with the password hanging out on the command line call used to run the sensu plugin. This isn’t much fun, especially when you consider that many remote services require some kind of API key or password to check their health.
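For illustration, here’s the sort of check definition you don’t want floating around (this uses the hypothetical check_password_test plugin that appears later in this post, with the password simply inlined into the command):

{
  "checks": {
    "check_password_test": {
      "standalone": true,
      "command": "/usr/local/bin/check_password_test --password correct-horse-battery-staple"
    }
  }
}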

Well, fortunately, this is pretty easy to rectify with a few tweaks.

It’s worth mentioning at this point that I use puppet to deploy my sensu clients and checks, so this will assume you’re doing the same. I’ll try to include pure JSON examples where possible so that those of you who use other config management tools can follow along.

Let’s redact some shit

A fundamental thing to understand with redaction is that the redaction key you want to use is actually set inside the client config, not inside the check. You have to define in the client config:

  • What things you want to redact
  • What the password for $service will be

Redacting fields

Determining what to redact is fairly easy; just include a “redact” key in the client’s json.

If you use Puppet, you can do this really easily with the sensu-puppet module’s sensu::redact parameter (note: at the time of writing this hasn’t been merged, but hopefully it will be soon!)

class { 'sensu':
  redact => [ 'password', 'pass', 'api_key' ]
}

Or, if like me you use hiera, set the following in your common.yaml:

sensu::redact:
  - "password"
  - "pass"
  - "api_key"

This will result in the following JSON in your client config:

{
  "client": {
    "name": "client",
    "address": "192.168.4.21",
    "subscriptions": [
      "base"
    ],
    "safe_mode": false,
    "keepalive": {
      "handler": "opsgenie",
      "thresholds": {
        "warning": 45,
        "critical": 90
      }
    },
    "redact": [
      "password",
      "api_key",
      "pass",
    ],
    "socket": {
      "bind": "127.0.0.1",
      "port": 3030
    }
  }
}

Setting a service password

Depending on how you do your configuration management, you may find this easy or hard, but the next step is to set a password inside the client for the service you’ll monitor. I make heavy use of the roles and profiles pattern with my Puppet config, so this is as simple as setting a password inside a particular client role. Normally I use hiera to do this, and I make use of sensu-puppet’s client_custom options to set up these passwords. It looks a little bit like this:

class { 'sensu':
  redact => [ 'password', 'pass', 'api_key' ],
  client_custom => {
    github => {
      password => 'correct-horse-battery-staple',
    },
  },
}

or with hiera:

sensu::redact:
  - "password"
  - "pass"
  - "api_key"
sensu::client_custom:
  github:
    password: "correct-horse-battery-staple"

The resulting JSON, for those that use different config management systems, looks like this:

{
  "client": {
    "name": "client",
    "address": "192.168.4.21",
    "subscriptions": [
      "base"
    ],
    "safe_mode": false,
    "keepalive": {
      "handler": "opsgenie",
      "thresholds": {
        "warning": 45,
        "critical": 90
      }
    },
    "redact": [
      "password",
      "api_key",
      "pass",
    ],
    "socket": {
      "bind": "127.0.0.1",
      "port": 3030
    },
    "github": {
      "password": "correct-horse-battery-staple"
    }
  }
}

Note I’ve set up the service name (github) and then made a subkey of “password”. The key holding the secret needs to be one of the fields we told the client to redact (here, “password”), so nesting it under the service name gives each service its own secret while making sure the value itself still gets redacted.

When you look in your dashboard or in your logfiles, you’ll now see something like this:

Sensu Redaction
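In other words, assuming Sensu’s usual behaviour of replacing matched values with the literal string REDACTED, the client data shown in uchiwa and returned by the API ends up looking something like this:

{
  "github": {
    "password": "REDACTED"
  }
}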

Using the password

Now, when I define the sensu check on the clients (with the same role, so it has access to the password, obviously) we use check command token substitution to make use of this field. This will essentially grab the correct value from the client config (which you defined earlier) and substitute it in place of the token.

sensu::check { 'check_password_test':
  command => '/usr/local/bin/check_password_test --password :::github.password:::',
}

Of course, with JSON, just make sure the command field uses the substitution:

{
  "checks": {
    "check_password_test": {
      "standalone": true,
      "command": "/usr/local/bin/check_password_test --password :::github.password:::",
    }
  }
}

Hopefully this clears up some questions about how exactly sensu redaction is used in the wild, as it’s fairly easy to implement and yet not a lot of people seem to do it!

supine
3219 days ago
An Aussie living in Frankfurt

guapet: so my brother was telling me about this human resources certification he attended a while...

7 Comments and 26 Shares

guapet:

so my brother was telling me about this human resources certification he attended a while ago. in a panel, the panelist asked a bunch of people in attendance, “who here knows if an applicant for a job is right for it in under 60 seconds?”

hands shot up around the room, people smug about their ability to “weed out the riff-raff” when it came to hiring for their fortune 500.

“you should all be fired and probably in jail,” they said, waiting for the whole room to get uncomfortable, then continued, “because the only things you can really learn about a human being in under 60 seconds are all things that are fueled by prejudices and biases covered by american law. so now, i will teach you how to stop being racist, sexist, judgmental assholes and hire people that will better your company of employ.”

supine
3289 days ago
An Aussie living in Frankfurt
digdoug
3295 days ago
Louisville, KY
6 public comments
emdot
3295 days ago
!
San Luis Obispo, CA
PaulPritchard
3295 days ago
Well played
Belgium
jepler
3295 days ago
nice
Earth, Sol system, Western spiral arm
MotherHydra
3295 days ago
Schooled.
Space City, USA
skittone
3295 days ago
Ha!
JayM
3295 days ago
:)
sirshannon
3295 days ago
BAM.

Germany. Why. Can’t. You. Queue.

1 Comment

Germany, tell me something. I am on my knees, here, draped at your sensible shoe-clad feet. Why, for the love of God, do you struggle so fundamentally, so profoundly, with the notion of a tidy, fair queue? What is it about lining up for coffee, for bread, for the bloody bus, that sends you all into a primal spin, prepared to trample on each other, betray one another, shove and sidestep until all sense of civility, of humanity, has been sucked out of the air?

I saw James Bond last night. Two screenings were occurring within 15 minutes of each other: the OV and the German version. We arrived thirty minutes early to collect our reserved tickets, to find an enormous crowd milling about in the foyer. Because this is what Germans do, they mill. They mill on footpaths, they mill at train stations, they mill in bakeries and coffee shops. In situations where, in England or Australia, you would find a snake of people, with a clear leader who shall rightly receive service first, in Germany you find a crowd of hard, set faces, all quick to thrust a hand in the air and shriek, ‘zwei normale Brötchen’ (‘two normal bread rolls’) before the server has had time to ask ‘wer bekommt?’ (‘who’s next?’) So the Germans were milling in the foyer, waiting for their cinema to open, like a swarm of beetles clutching plastic trays of corn chips and toxic cheese sauce. Upon purchasing our own corn chips and toxic cheese sauce, we asked if cinema 2 had opened yet, and were told it had. The beetles were waiting for cinema 1 to open.

Now, at this juncture, I ask you to imagine what could have been, had the cinema management requested all people join the appropriate queue for cinema 1 or cinema 2, thus leaving a thoroughfare between queues for general breathing space, and thus allowing people to not only know when their theatre had opened, but then be able to proceed in an orderly manner into the theatre upon doors opening. As it stood, the millers were blocking every inch of space from the entrance to the bar, and we had to brace and then push through seventy bodies, saying ‘entschuldigung’ (‘excuse me’) ad nauseam because nobody moves. How hard is it? How hard is it to stand in a line that enables everyone to move, breathe, and access where they have to go? Why do you all have to stand as some sort of giant, impenetrable human structure, which moves and acts as one? Are you all going to then begin a slow, en masse crawl into the cinema together? Do all seventy of you plan on easing through the doors, as one? What’s the plan here, Germany?

And it isn’t just at the movies. Have you ever seen what happens at a bus stop? With a couple of minutes before the bus pulls in, you step out and stand approximately where it will brake and open its doors, and it’s just you. Only you. Perhaps you are the only one catching this bus, you think. Super. You see it coming over the horizon, so you reach for your purse. Suddenly someone is at your right shoulder, and then someone else at their right shoulder. As the bus nears, a few people crawl out from the shadows and begin the patented German mill. Before you know it, as the bus pulls in, you are about ten people along in a two-person-deep crush, and everyone moves as one to the doors. Ditto with the trains. Everyone gets on and off at the same time, which means for a harrowing three seconds, you are locked in a weird sort of human chain before you all press past each other and, in some miracle of science, twenty people simultaneously disembark and board.

When I was a child, some school lunches came from the tuckshop, a handy little shop at school from which one could order hot or cold lunches, drinks, and snacks at designated times. Come recess or lunch, students would swarm to the tuckshop and immediately form a line. A line monitored by the teacher on tuckshop duty, to ensure orderliness and, above all, a modicum of fairness. You see, it was – and is – the gravest of insults to jump the queue. To push in. Pushing in was loudly called out, the perpetrator thoroughly shamed and moved to the back of the queue. Oh sure, people tried; there is the classic move of joining a friend who is near the front of the queue and ordering with them. Or the completely shameless move of going to the front of the queue to check the sandwich display and then seamlessly segueing into the queue and hoping no one will notice. Both of these things are appalling things to do to other people, who have been patiently and fairly waiting their turn. It just isn’t cricket. You don’t do it.

Queueing, at its very heart, is about fairness. You put in the waiting time, you take your turn, the service is fairly, evenly distributed and there is a sense of order and civil procession. I know that if I am behind someone in a queue, it is because they got there before me, have thus been waiting longer, and should thus be served before me. Arriving at a busy cafe or a busy bakery isn’t an invitation to try and outsmart some poor dolt who has been waiting ten minutes longer than me, by methodically pushing forward until you are at the frontline, and then raising your hand like a whip, when asked ‘who is next’. Or sidling up to ‘check out the sandwiches’ and then just happening to be the ‘first in line’ and shamelessly placing your order ahead of the ten people waiting behind you. That is called being rude, and frankly, mean. It is one thing to form an impenetrable human structure and block an entrance, quite another to manoeuvre yourself into a position that enables you to get served before someone who has been waiting for twenty minutes behind you. The former is bloody irritating and senseless, the latter is wildly impolite. And Germans, you do both.

And here is the kicker: you are a nation known for your love of order. You even SAY ‘all is in order’ when English speakers would say ‘all is well.’ ‘Ordnung’ is the most used German word after ‘wurst’. You love a process. You love efficiency. And yet you cannot form a line of people to save yourselves. It beggars belief. Sometimes I wonder if you are taking the piss; how is it possible you are so motivated by order and efficiency, but as a race, lose your collective shit when it comes to the most basic form of those very virtues?

So tell me, please. Why? Why. Can’t. You. Queue?


supine
3296 days ago
.
An Aussie living in Frankfurt