
How We Built r/Place – Upvoted


Brian Simpson, Matt Lee, & Daniel Ellis
(u/bsimpson, u/madlee, & u/daniel)

Each year for April Fools’, rather than a prank, we like to create a project that explores the way that humans interact at large scales. This year we came up with Place, a collaborative canvas on which a single user could only place a single tile every five minutes. This limitation de-emphasized the importance of the individual and necessitated the collaboration of many users in order to achieve complex creations. Each tile placed was relayed to observers in real-time.

Multiple engineering teams (frontend, backend, mobile) worked on the project and most of it was built using existing technology at Reddit. This post details how we approached building Place from a technical perspective.

But first, if you want to check out the code for yourself, you can find it here. And if you’re interested in working on projects like Place in the future, we’re hiring!

Requirements

Defining requirements for an April Fools’ project is extremely important because it will launch with zero ramp-up and be available immediately to all of Reddit’s users. If it doesn’t work perfectly out of the gate, it’s unlikely to attract enough users to make for an interesting experience.

  • The board must be 1000 tiles by 1000 tiles so it feels very large.
  • All clients must be kept in sync with the same view of the current board state, otherwise users with different versions of the board will have difficulty collaborating.
  • We should support at least 100,000 simultaneous users.
  • Users can place one tile every 5 minutes, so we must support an average update rate of 100,000 tiles per 5 minutes (333 updates/s).
  • The project must be designed in such a way that it’s unlikely to affect the rest of the site’s normal function even with very high traffic to r/place.
  • The configuration must be flexible in case there are unexpected bottlenecks or failures. This means that board size and tile cooldown should be adjustable on the fly in case data sizes are too large or update rates are too high.
  • The API should be generally open and transparent so the reddit community can build on it (bots, extensions, data collection, external visualizations, etc) if they choose to do so.

Backend

Implementation decisions

The main challenge for the backend was keeping all the clients in sync with the state of the board. Our solution was to initialize the client state by having it listen for real-time tile placements immediately and then make a request for the full board. The full board in the response could be a few seconds stale as long as we also had real-time placements starting from before it was generated. When the client received the full board it replayed all the real-time placements it received while waiting. All subsequent tile placements could be drawn to the board immediately as they were received.
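
In sketch form, the client-side bootstrapping looked roughly like this (the socket, drawTile, and drawFullBoard names, and the endpoint path, are illustrative rather than the actual r/place client code):

// Buffer real-time placements, fetch the (possibly slightly stale) full board,
// then replay the buffer so the local copy converges on the true state.
const pendingPlacements = [];
let boardLoaded = false;

socket.onmessage = (event) => {
  const placement = JSON.parse(event.data);   // {x, y, color}
  if (boardLoaded) {
    drawTile(placement);                      // apply immediately
  } else {
    pendingPlacements.push(placement);        // hold until the snapshot arrives
  }
};

fetch('/api/place/board-bitmap')              // illustrative endpoint name
  .then((res) => res.arrayBuffer())
  .then((snapshot) => {
    drawFullBoard(snapshot);                  // snapshot may be a few seconds stale
    pendingPlacements.forEach(drawTile);      // replay what arrived in the meantime
    pendingPlacements.length = 0;
    boardLoaded = true;
  });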

For this scheme to work we needed the request for the full state of the board to be as fast as possible. Our initial approach was to store the full board in a single row in Cassandra and each request for the full board would read that entire row. The format for each column in the row was:

(x, y): {'timestamp': epochms, 'author': user_name, 'color': color}

Because the board contained 1 million tiles this meant that we had to read a row with 1 million columns. On our production cluster this read took up to 30 seconds, which was unacceptably slow and could have put excessive strain on Cassandra.

Our next approach was to store the full board in redis. We used a bitfield of 1 million 4 bit integers. Each 4 bit integer was able to encode a 4 bit color, and the x,y coordinates were determined by the offset (offset = x + 1000y) within the bitfield. We could read the entire board state by reading the entire bitfield. We were able to update individual tiles by updating the value of the bitfield at a specific offset (no need for locking or read/modify/write). We still needed to store the full details in Cassandra so that users could inspect individual tiles to see who placed them and when. We also planned on using Cassandra to restore the board in case of a redis failure. Reading the entire board from redis took less than 100ms, which was fast enough.
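
As a rough sketch, the offset arithmetic and the raw redis commands this maps onto look like the following (the key name is illustrative):

// Offset of a tile within the bitfield, counted in 4-bit units.
function tileOffset(x, y) {
  return x + 1000 * y;
}

// The raw redis commands this corresponds to:
//   BITFIELD place SET u4 #<offset> <colorIndex>   write one tile's 4-bit color
//   GET place                                      read the whole board: 1,000,000 x 4 bits = 500 KB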

Illustration showing how colors were stored in redis, using a 2×2 board:

We were concerned about exceeding maximum read bandwidth on redis. If many clients connected or refreshed at once they would simultaneously request the full state of the board, all triggering reads from redis. Because the board was a shared global state the obvious solution was to use caching. We decided to cache at the CDN (Fastly) layer because it was simple to implement and it meant the cache was as close to clients as possible which would help response speed. Requests for the full state of the board were cached by Fastly with an expiration of 1 second. We also added the stale-while-revalidate cache control header option to prevent more requests from falling through than we wanted when the cached board expired. Fastly maintains around 33 POPs which do independent caching, so we expected to get at most 33 requests per second for the full board.
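
The post doesn’t give the exact header values, but a response caching policy along these lines, a 1-second TTL plus stale-while-revalidate (the revalidation window shown is an assumed value), produces the behavior described:

Cache-Control: public, max-age=1, stale-while-revalidate=5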

We used our websocket service to publish updates to all the clients. We’ve had success using it in production for reddit live threads with over 100,000 simultaneous viewers, live PM notifications, and other features. The websocket service has also been a cornerstone of our past April Fools projects such as The Button and Robin. For r/place, clients maintained a websocket connection to receive real-time tile placement updates.

API

Retrieve the full board

Requests first went to Fastly. If there was an unexpired copy of the board it would be returned immediately without hitting the reddit application servers. Otherwise, if there was a cache miss or the copy was too old, the reddit application would read the full board from redis and return that to Fastly to be cached and returned to the client.

Request rate and response time as measured by the reddit application:

Notice that the request rate never exceeds 33/s, meaning that the caching by Fastly was very effective at preventing most requests from hitting the reddit application.

When a request did hit the reddit application the read from redis was very fast.

Draw a tile

The steps for drawing a tile were:

  1. Read the timestamp of the user’s last tile placement from Cassandra. If it was less than the cooldown period (5 minutes) ago, reject the draw attempt and return an error to the user.
  2. Write the tile details to redis and Cassandra.
  3. Write the current timestamp as the user’s last tile placement in Cassandra.
  4. Tell the websocket service to send a message to all connected clients with the new tile.

All reads and writes to Cassandra were done with consistency level QUORUM to ensure strong consistency.
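
A sketch of that flow, written as JavaScript-style pseudocode for consistency with the other examples in this post (the cassandra, redis, and websockets helpers are hypothetical stand-ins for the real services):

// Hypothetical helpers (cassandra.*, redis.*, websockets.*) stand in for the real services.
const COOLDOWN_MS = 5 * 60 * 1000;

async function placeTile(user, x, y, color) {
  // 1. Reject the attempt if the user placed a tile within the cooldown window.
  //    (Cassandra reads and writes used consistency level QUORUM.)
  const lastPlaced = await cassandra.getLastPlacementTime(user);
  if (lastPlaced && Date.now() - lastPlaced < COOLDOWN_MS) {
    throw new Error('cooldown not elapsed');
  }

  // 2. Write the tile to both stores: redis for fast full-board reads,
  //    Cassandra for per-tile details and for restoring the board if redis fails.
  await redis.setTileColor(x + 1000 * y, color);
  await cassandra.writeTileDetails(x, y, user, color, Date.now());

  // 3. Record the placement time used by the cooldown check in step 1.
  await cassandra.setLastPlacementTime(user, Date.now());

  // 4. Fan the new tile out to every connected client.
  websockets.broadcast({ x, y, color });
}

// Note: steps 1-3 are not atomic, which is the race condition described below
// that let some users place multiple tiles at once.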

We actually had a race condition here that allowed users to place multiple tiles at once. There was no locking around steps 1-3, so simultaneous tile draw attempts could all pass the check at step 1 and then each draw a tile at step 2. It seems that some users discovered this bug, or had bots that didn’t gracefully follow the rate limits, and about 15,000 tiles (~0.09% of all tiles placed) were drawn by exploiting it.

Request rate and response time as measured by the reddit application:

We experienced a maximum tile placement rate of almost 200/s. This was below our calculated maximum rate of 333/s (average of 100,000 users placing a tile every 5 minutes).

Get details of a single tile

Requests for individual tiles resulted in a read straight from Cassandra.

Request rate and response time as measured by the reddit application:

This endpoint was very popular. In addition to regular client requests, people wrote scrapers to retrieve the entire board one tile at a time. Since this endpoint wasn’t cached by the CDN, all requests ended up being served by the reddit application.

Response times for these requests were pretty fast and stable throughout the project.

Websockets

We don’t have isolated metrics for r/place’s effect on the websocket service, but we can estimate its impact by subtracting the baseline usage, measured before the project started and after it ended, from the values observed while it was running.

Total connections to the websocket service:

The baseline before r/place began was around 20,000 connections and it peaked at 100,000 connections, so we probably had around 80,000 users connected to r/place at its peak.

Websocket service bandwidth:

At the peak of r/place the websocket service was transmitting over 4 Gbps (150 Mbps per instance across 24 instances).

Frontend: Web and Mobile Clients

Building the frontend for Place involved many of the usual challenges of cross-platform app development. We wanted Place to be a seamless experience on all of our major platforms, including desktop web, mobile web, iOS, and Android.

The UI in Place needed to do three important things:

  1. Display the state of the board in real time
  2. Facilitate user interaction with the board
  3. Work on all of our platforms, including our mobile apps

The main focus of the UI was the canvas, and the Canvas API was a perfect fit for it. We used a single 1000 x 1000 <canvas> element, drawing each tile as a single pixel.

Drawing the canvas

The canvas needed to represent the state of the board in real time. We needed to draw the state of the entire board when the page loaded, and draw updates to the board state that came in over websockets. There are generally three ways to go about updating a canvas element using the CanvasRenderingContext2D interface:

  1. Drawing an existing image onto the canvas using drawImage()
  2. Drawing shapes with the various shape-drawing methods, e.g. using fillRect() to fill a rectangle with a color
  3. Constructing an ImageData object and painting it into the canvas using putImageData()

The first option wouldn’t work for us since we didn’t already have the board in image form, leaving options 2 and 3. Updating individual tiles using fillRect() was very straightforward: when a websocket update comes in, just draw a 1 x 1 rectangle at the (x, y) position. This worked OK in general, but wasn’t great for drawing the initial state of the board. The putImageData() method was a much better fit for that, since we were able to define the color of each pixel in a single ImageData object and draw the whole canvas at once.
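
A minimal sketch of those two paths (boardCanvas and palette are assumed to exist):

const ctx = boardCanvas.getContext('2d');

// Incremental update: paint one tile as a 1 x 1 rectangle.
function drawTile({ x, y, color }) {
  ctx.fillStyle = palette[color];    // e.g. '#FFA7D1'
  ctx.fillRect(x, y, 1, 1);
}

// Initial load: paint the whole board in one call from an RGBA pixel buffer.
function drawFullBoard(rgbaBytes) {  // Uint8ClampedArray, 4 bytes per pixel
  ctx.putImageData(new ImageData(rgbaBytes, 1000, 1000), 0, 0);
}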

Drawing the initial state of the board

Using putImageData() requires defining the board state as a Uint8ClampedArray, where each value is an 8-bit unsigned integer clamped to 0-255. Each value represents a single color channel (red, green, blue, and alpha), and each pixel requires 4 items in the array. A 2 x 2 canvas would require a 16-byte array, with the first 4 bytes representing the top left pixel on the canvas, and the last 4 bytes representing the bottom right pixel.

Illustration showing how canvas pixels relate to their Uint8ClampedArray representation:

For Place’s canvas, the array is 4 million bytes long, or 4 MB.

On the backend, the board state is stored as a 4-bit bitfield. Each color is represented by a number between 0 and 15, allowing us to pack 2 pixels of color information into each byte. In order to use this on the client, we needed to do 3 things:

  1. Pull the binary data down to the client from our API
  2. “Unpack” the data
  3. Map the 4-bit colors to usable 32-bit colors

To pull down the binary data, we used the Fetch API in browsers that support it. For those that don’t, we fell back to a normal XMLHttpRequest with responseType set to “arraybuffer”.

The binary data we receive from the API contains 2 pixels of color data in each byte. The smallest TypedArray constructors available let us work with binary data in 1-byte units. This is inconvenient to work with on the client, so the first thing we do is “unpack” that data into a friendlier form. The process is straightforward: we iterate over the packed data, split out the high- and low-order bits, and copy them into separate bytes of another array. Finally, the 4-bit color values needed to be mapped to usable 32-bit colors.

API response (packed bytes)    0x47                    0xE9
Unpacked 4-bit values          0x04        0x07        0x0E        0x09
Mapped to 32-bit RGBA colors   0xFFA7D1FF  0xA06A42FF  0xCF6EE4FF  0x94E044FF
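
A minimal sketch of the unpacking step (variable names are illustrative):

// packed: Uint8Array of the API response, two tiles per byte.
function unpack(packed) {
  const colors = new Uint8Array(packed.length * 2);
  for (let i = 0; i < packed.length; i++) {
    colors[i * 2] = packed[i] >> 4;        // high nibble: first tile
    colors[i * 2 + 1] = packed[i] & 0x0f;  // low nibble: second tile
  }
  return colors;                           // values 0-15, ready for palette lookup
}

Running unpack on the two example bytes above (0x47, 0xE9) yields 0x04, 0x07, 0x0E, 0x09, which the palette lookup then maps to the 32-bit colors shown.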

The ImageData structure needed to use the putImageData() method requires the end result to be readable as a Uint8ClampedArray with the color channel bytes in RGBA order. This meant we needed to do another round of “unpacking”, splitting each color into its component channel bytes and putting them into the correct index. Needing to do 4 writes per pixel was also inconvenient, but luckily there was another option.

TypedArray objects are essentially array views into ArrayBuffer instances, which actually represent the binary data. One neat thing about them is that multiple TypedArray instances can read and write to the same underlying ArrayBuffer instance. Instead of writing 4 values into an 8-bit array, we could write a single value into a 32-bit array!  Using a Uint32Array to write, we were able to easily update a tile’s color by updating a single array index. The only change required was that we had to store our color palette in reverse-byte order (ABGR) so that the bytes automatically fell in the correct position when read using the Uint8ClampedArray.

Uint32Array index              0                1                2                3
32-bit value written (ABGR)    0xFFD1A7FF       0xFF426AA0       0xFFE46ECF       0xFF44E094
Uint8ClampedArray bytes        255 167 209 255  160 106 66 255   207 110 228 255  148 224 68 255
Channel order                  r   g   b   a    r   g   b   a    r   g   b   a    r   g   b   a
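
A sketch of that shared-buffer setup, using the palette values from the illustration above (note that writing whole 32-bit values relies on the platform being little-endian, which is effectively every device a browser runs on):

// One ArrayBuffer, two views over the same bytes: write 32-bit colors,
// read 8-bit RGBA channels for putImageData().
const buffer = new ArrayBuffer(1000 * 1000 * 4);
const colorView = new Uint32Array(buffer);        // one write per tile
const rgbaView = new Uint8ClampedArray(buffer);   // what ImageData/putImageData reads

// Palette stored in ABGR order so the bytes land as R, G, B, A on little-endian hardware.
const palette32 = [0xFFD1A7FF, 0xFF426AA0, 0xFFE46ECF, 0xFF44E094 /* ... */];

function setTile(x, y, colorIndex) {
  colorView[y * 1000 + x] = palette32[colorIndex];
}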

Handling websocket updates

Using the fillRect() method worked OK for drawing individual pixel updates as they came in, but it had one major drawback: large bursts of updates arriving at the same time could cripple browser performance. We knew that updates to the board state would be very frequent, so we needed to address this issue.

Instead of redrawing the canvas immediately each time a websocket update came in, we wanted to be able to batch multiple websocket updates that come in around the same time and draw them all at once. We made two changes to do this:

  1. We stopped using fillRect() altogether, since we’d already figured out a nice convenient way of updating many pixels at once with putImageData()
  2. We moved the actual canvas drawing into a requestAnimationFrame loop

By moving the drawing into an animation loop, we were able to write websocket updates to the ArrayBuffer immediately and defer the actual drawing. All websocket updates in between frames (about 16ms) were batched into a single draw. Because we used requestAnimationFrame, this also meant that if draws took too long (longer than 16ms), only the refresh rate of the canvas would be affected (rather than crippling the entire browser).
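
Roughly, and reusing the setTile and rgbaView names from the earlier sketches:

let dirty = false;

socket.onmessage = (event) => {
  const { x, y, color } = JSON.parse(event.data);
  setTile(x, y, color);     // write into the shared ArrayBuffer immediately...
  dirty = true;             // ...but defer the actual draw to the next frame
};

function render() {
  if (dirty) {
    // Every update received since the last frame lands in this one draw call.
    ctx.putImageData(new ImageData(rgbaView, 1000, 1000), 0, 0);
    dirty = false;
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);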

Interacting with the Canvas

Equally importantly, the canvas needed to facilitate user interaction. The core way that users can interact with the canvas is to place tiles on it. Precisely drawing individual pixels at 100% scale would be extremely painful and error prone, so we also needed to be able to zoom in (a lot!). We also needed to be able to pan around the canvas easily, since it was too large to fit on most screens (especially when zoomed in).

Camera zoom

Users were only allowed to draw tiles once every 5 minutes, so misplaced tiles would be especially painful. We had to zoom in on the canvas enough that each tile would be a fairly large target for drawing. This was especially important for touch devices. We used a 40x scale for this, giving each tile a 40 x 40 target area. To apply the zoom, we wrapped the <canvas> element in a <div> that we applied a CSS transform: scale(40, 40) to. This worked great for placing tiles, but wasn’t ideal for viewing the board (especially on small screens), so we made this toggleable between two zoom levels: 40x for drawing, 4x for viewing.
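
In sketch form, toggling the zoom is just swapping the scale applied to the wrapper element (element and variable names are illustrative):

const ZOOM_DRAW = 40;   // each tile becomes a 40 x 40 target for placement
const ZOOM_VIEW = 4;    // zoomed-out level for looking around
let zoomedIn = false;

function toggleZoom(scaleWrapper) {   // the <div> wrapping the <canvas>
  zoomedIn = !zoomedIn;
  const s = zoomedIn ? ZOOM_DRAW : ZOOM_VIEW;
  scaleWrapper.style.transform = 'scale(' + s + ', ' + s + ')';
}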

Using CSS to scale up the canvas made it easy to keep the code that handled drawing the board separate from the code that handled scaling, but unfortunately this approach had some issues. When scaling up an image (or canvas), browsers default to algorithms that apply “smoothing” to the image. This works OK in some cases, but it completely ruins pixel art by turning it into a blurry mess. The good news is that there’s another CSS property, image-rendering, which allows us to ask browsers not to do that. The bad news is that not all browsers fully support that property.

Bad news blurs:

We needed another way to scale up the canvas for these browsers. I mentioned earlier on that there are generally three ways to go about drawing to a canvas. The first method, drawImage(), supports drawing an existing image or another canvas into a canvas. It also supports scaling that image up or down when drawing it, and though upscaling has the same blurring issue by default that upscaling in CSS has, this can be disabled in a more cross-browser compatible way by turning off the CanvasRenderingContext2D.imageSmoothingEnabled flag.

So the fix for our blurry canvas problem was to add another step to the rendering process. We introduced another <canvas> element, this one sized and positioned to fit across the container element (i.e. the viewable area of the board). After redrawing the canvas, we use drawImage() to draw the visible portion of it into this new display canvas at the proper scale. Since this extra step adds a little overhead to the rendering process, we only did this for browsers that don’t support the CSS image-rendering property.
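
A sketch of that fallback path (the offset and scale bookkeeping is simplified, and the variable names are illustrative):

const displayCtx = displayCanvas.getContext('2d');
displayCtx.imageSmoothingEnabled = false;   // keep hard pixel edges when upscaling

function renderDisplay(offsetX, offsetY, scale) {
  // Copy the visible slice of the offscreen board canvas into the on-screen
  // display canvas at the current zoom, with smoothing disabled.
  displayCtx.drawImage(
    boardCanvas,
    offsetX, offsetY,                                  // top-left of the visible region (board pixels)
    displayCanvas.width / scale, displayCanvas.height / scale,
    0, 0, displayCanvas.width, displayCanvas.height    // destination rectangle (screen pixels)
  );
}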

Camera pan

The canvas is a fairly big image, especially when zoomed in, so we needed to provide ways of navigating it. To adjust the position of the canvas on the screen, we took a similar approach to what we did with scaling: we wrapped the <canvas> element in another <div> that we applied CSS transform: translate(x, y) to. Using a separate div made it easy to control the order that these transforms were applied to the canvas, which was important for preventing the camera from moving when toggling the zoom level.

We ended up supporting a variety of ways to adjust the camera position, including:

  • Click and drag
  • Click to move
  • Keyboard navigation

Each of these methods required a slightly different approach.

Click-and-drag

The primary way of navigating was click-and-drag (or touch-and-drag). We stored the x, y position of the mousedown event. On each mousemove event, we found the offset of the mouse position relative to that start position, then added that offset to the existing saved canvas offset. The camera position was updated immediately so that this form of navigation felt really responsive.
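
A compact sketch of that bookkeeping for the mouse case (touch handling is analogous; setCameraPosition is a hypothetical helper):

const savedOffset = { x: 0, y: 0 };
let dragStart = null;

canvas.addEventListener('mousedown', (e) => {
  dragStart = { x: e.clientX, y: e.clientY };
});

canvas.addEventListener('mousemove', (e) => {
  if (!dragStart) return;
  // Pan by the distance moved since mousedown, on top of the saved offset.
  setCameraPosition(
    savedOffset.x + (e.clientX - dragStart.x),
    savedOffset.y + (e.clientY - dragStart.y)
  );
});

canvas.addEventListener('mouseup', (e) => {
  savedOffset.x += e.clientX - dragStart.x;
  savedOffset.y += e.clientY - dragStart.y;
  dragStart = null;
});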

Click-to-move

We also allowed clicking on a tile to center that tile on the screen. To accomplish this, we had to keep track of the distance moved between the mousedown and mouseup events, in order to distinguish “clicks” from “drags”. If the mouse did not move enough to be considered a “drag”, we adjusted the camera position by the difference between the mouse position and the point at the center of the screen. Unlike click-and-drag movement, the camera position was updated with an easing function applied. Instead of setting the new position immediately, we saved it as a “target” position. Inside the animation loop (the same one used to redraw the canvas), we moved the current camera position closer to the target using an easing function. This prevented the camera move from feeling too jarring.

Keyboard navigation

We also supported navigating with the keyboard, using either the WASD keys or the arrow keys. The four direction keys controlled an internal movement vector. This vector defaulted to (0, 0) when no movement keys were down, and each of the direction keys added or subtracted 1 from either the x or y component of the vector when pressed. For example, pressing the “right” and “up” keys would set the movement vector to (1, -1). This movement vector was then used inside the animation loop to move the camera.

During the animation loop, a movement speed was calculated based on the current zoom level using the formula:

movementSpeed = maxZoom / currentZoom * speedMultiplier

This made keyboard navigation faster when zoomed out, which felt a lot more natural.

The movement vector is then normalized and multiplied by the movement speed, then applied to the current camera position. We normalized the vector to make sure diagonal movement was the same speed as orthogonal movement, which also helped it feel more natural. Finally, we applied the same kind of easing function to changes to the movement vector itself. This smoothed out changes in movement direction and speed, making the camera feel much more fluid and juicy.
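
A sketch of how this comes together inside the animation loop; maxZoom, currentZoom, and speedMultiplier mirror the formula above, while the easing constant is an assumed value:

const keyVector = { x: 0, y: 0 };   // updated by keydown/keyup handlers
const velocity = { x: 0, y: 0 };    // eased vector actually applied to the camera
const EASE = 0.2;                   // assumed smoothing factor

function stepCamera(camera) {       // called once per animation frame
  const speed = (maxZoom / currentZoom) * speedMultiplier;

  // Normalize so diagonal movement isn't faster than orthogonal movement.
  const len = Math.hypot(keyVector.x, keyVector.y) || 1;
  const targetX = (keyVector.x / len) * speed;
  const targetY = (keyVector.y / len) * speed;

  // Ease toward the target so changes in direction and speed feel smooth.
  velocity.x += (targetX - velocity.x) * EASE;
  velocity.y += (targetY - velocity.y) * EASE;

  camera.x += velocity.x;
  camera.y += velocity.y;
}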

Mobile app support

There were a couple of additional challenges to embedding the canvas in the mobile apps for iOS and Android. First, we needed to authenticate the user so they could place tiles. Unlike on the web, where authentication is session based, with the mobile apps we use OAuth. This means that the app needs to provide the webview with an access token for the currently logged in user. The safest way to do this was to inject the OAuth authorization headers by making a JavaScript call from the app to the webview (this would’ve also allowed us to set other headers if needed). It was then a matter of passing the authorization headers along with each API call.

r.place.injectHeaders({'Authorization': 'Bearer <access token>'});

For the iOS side we additionally implemented notification support for when your next tile was ready to be placed on the canvas. Since tile placement occurred completely in the webview, we needed to implement a callback to the native app. Fortunately, with iOS 8 and higher this is possible with a simple JavaScript call:

webkit.messageHandlers.tilePlacedHandler.postMessage(this.cooldown / 1000);

The delegate method in the app then schedules a notification based on the cooldown timer that was passed in.

What We Learned

You’ll always miss something

Since we had planned everything out perfectly, we knew when we launched, nothing could possibly go wrong. We had load tested the frontend, load tested the backend, there was simply no way we humans could have made any other mistakes.

Right?

The launch went smoothly. Over the course of the morning, as the popularity of r/place went up, so did the number of connections and traffic to our websockets instances:

No big deal, and exactly what we expected. Strangely enough, we thought we were network-bound on those instances and figured we had a lot more headroom. Looking at the CPU of the instances, however, painted a different picture:

Those are 8-core instances, so it was clear they were reaching their limits. Why were these boxes suddenly behaving so differently? We chalked it up to place being a much different workload type than they’d seen before. After all, these were lots of very tiny messages; we typically send out larger messages like live thread updates and notifications. We also usually don’t have that many people all receiving the same message, so a lot of things were different.

Still, no big deal, we figured we’d just scale it and call it a day. The on-call person doubled the number of instances and went to a doctor’s appointment, not a care in the world.

Then, this happened:

That graph may seem unassuming if it weren’t for the fact that it was for our production RabbitMQ instance, which handles not only our websockets messages but basically everything that reddit.com relies on. And it wasn’t happy; it wasn’t happy at all.

After a lot of investigating, hand-wringing, and instance upgrading, we narrowed down the problem to the management interface. It had always seemed kind of slow, and we realized that the rabbit diamond collector we use for getting our stats was querying it regularly. We believe that the additional exchanges created when launching new websockets instances, combined with the throughput of messages we were receiving on those exchanges, caused rabbit to buckle while doing the bookkeeping needed to answer queries from the admin interface. So we turned it off, and things got better.

We don’t like being in the dark, so we whipped up an artisanal, hand-crafted monitoring script to get us through the project:

$ cat s****y_diamond.sh

#!/bin/bash
# Emit each queue's message count as a Graphite metric and ship it to the
# metrics host via netcat, skipping anonymous (amq.gen-*) queues.
/usr/sbin/rabbitmqctl list_queues | /usr/bin/awk '$2~/[0-9]/{print "servers.foo.bar.rabbit.rabbitmq.queues." $1 ".messages " $2 " " systime()}' | /bin/grep -v 'amq.gen' | /bin/nc 10.1.2.3 2013

If you’re wondering why we kept adjusting the timeouts on placing pixels, there you have it. We were trying to relieve pressure to keep the whole project running. This is also the reason why, during one period, some pixels were taking a long time to show up.

So unfortunately, despite what messages like this would have you believe:

The reasons for the adjustments were entirely technical. Although it was cool to watch r/place/new after making the change:

So maybe that was part of the motivation.

Bots Will Be Bots

We ran into one more slight hiccup at the end of the project. In general, one of our recurring problems is clients with bad retry behavior. A lot of clients, when faced with an error, will simply retry. And retry. And retry. This means whenever there is a hiccup on the site, it can often turn into a retry storm from some clients who have not been programmed to back-off in the case of trouble.

When we turned off Place, the endpoints that a lot of bots were hitting started returning non-200 responses, and plenty of those bots responded by simply retrying over and over. Thankfully, this was easy to block at the Fastly layer.

Creating Something More

This project could not have come together so successfully without a tremendous amount of teamwork. We’d like to thank u/gooeyblob, u/egonkasper, u/eggplanticarus, u/spladug, u/thephilthe, u/d3fect and everyone else who contributed to the r/place team, for making this April Fools’ experiment possible.

And as we mentioned before, if you’re interested in creating unique experiences for millions of users, check out our Careers page.


Want to discuss this blog post? Join the r/place team in the comments on r/programming.


Giant meteorite sculpture is at the center of a stunning UK Holocaust Memorial proposal


Anish Kapoor Holocaust Memorial

British sculptor Anish Kapoor and Zaha Hadid Architects have proposed a massive sculpture resembling a meteorite for the centerpiece of the UK Holocaust Memorial.

Meteorites, mountains and stones are often at the centre of places of reflection, especially in the Jewish tradition. They call on the vastness of nature to be a witness to our humanity. A memorial to the Holocaust must be contemplative and silent, such that it evokes our empathy. It must be a promise to future generations that this terrible chapter in human history can never occur again.

All ten shortlisted proposals can be viewed on the design competition site.


Behind the Scenes of Mexico's Sinaloa Cartel


Shortly after Joaquín "El Chapo" Guzmán escaped from Mexico's supermax Altiplano fortress-prison, a period when his Sinaloa Cartel seemed to have near-total impunity, Spanish documentarian David Beriain somehow arranged to embed for three months with the cartel. With 100,000 slain in the last decade, the scale of Mexico's drug war is on par with industrial warfare between nation-states, and the goal is largely the same: territorial control. Sinaloa holds power over the western Sierra Madre, a fertile agricultural region, as well as the desert underside of the U.S. border from Tíjuana to Ciudad Juárez, giving it a fully integrated supply chain directly plugged into the world's largest drug market. 

But as Beriain and his crew began traveling up the west coast of the country, interviewing street-level cartel employees at every stage of the drug trade, Mexican marines kicked El Chapo's door down in Los Mochis, setting in motion the drug lord's recapture and eventual extradition to the United States. The latest episodes of Beriain's documentary series, Clandestino, which aired in Spain on the channel DMAX (and is available in full on YouTube), give an unprecedented look inside the largest narcotics operation in the history of the world at a critical moment for the Sinaloa Cartel, when leadership is split between warring factions at the top.

For journalists, the only countries more dangerous than Mexico are Syria and Iraq, and many Mexican reporters have been killed, some for trivial reasons – like publishing an unflattering photo of a crooked politician looking fat. After yet another of its reporters was slain recently, a daily paper in Ciudad Juárez called El Norte announced that it would shut down, explaining to readers in a front-page editorial that it was simply too dangerous to continue reporting the news in Mexico. Beriain, who has previously interviewed Amazonian cocaine smugglers and tomb raiders in Peru, never discloses how he managed to ingratiate himself with a group so notoriously lethal to journalists. (In response to Rolling Stone's request for comment, an automatic reply said that he was in El Salvador, filming new episodes of his show on the resurgence of death squads there.) But in the course of the film, he is able to question farmers, chemists, cooks, drivers, boatmen, smugglers, gunrunners and hitmen on what they do, why it's done, how much they earn, and why they choose this work. 

His cast of darkly fascinating characters includes a murderous party boy born into the business; an icy female commander in stiletto heels who justifies her actions in feminist terms; and a cartel gunsmith who has come to loathe guns but would have his hands maimed if he tried to quit. The episodes on YouTube have racked up hundreds of thousands of views, and in the Spanish-speaking press Beriain himself has become a subject for interviews, which have focused on the risks he and his camera crew ran in placing themselves at the mercy of killers whose violent propensities occasionally flash forth on camera. "There were really tense moments in which anyone could have easily shot us," he told a Madrid radio station.

In one scene, Beriain goes on night patrol with masked hitmen who are cruising around the streets of Culiacán, the capital of Sinaloa state, looking for enemy incursions. A police car pulls in front of them with flashing sirens and a cop gets out. The hitmen lock and load their assault rifles. The cop comes around to the window.

"Listen... " the cop says.

The hitman in the passenger seat interrupts him: "We're working here, mister."

The cop grasps the situation. "On your way then," he says, stepping back.

"We are the ones in control," the hitman explains to Beriain. "Police, politicians. Here, everyone is in deep."

Beriain later sits down with a corrupt officer who, like all the interviewees, has his face blurred and his voice disguised. "I'm just trying to survive," he says. Thirteen or 14 of his colleagues in law enforcement have been killed for refusing to do the cartel's dirty work, he says.

At another point in the film, he and his cameraman go to a party in a graveyard full of gaudy tombs and mausoleums dedicated to slain capos. A band is playing tubas and trombones and flashy revelers stand around luxury cars, firing guns in the air. Here Beriain interviews a hitman called Junior, who is dangerously coked up and can't stop twisting and fidgeting, constantly drawing his chrome pistol to unload and reload it, standing up and then sitting down to snort more from a plastic baggie. Junior is lamenting at the unfairness of paying for a woman's plastic surgery only to see her move on to another man. "They're not loyal to you," Junior says. "That's why so many bitches in Culiacán turn up dead."

Beriain asks Junior why he has Osama bin Laden's face engraved on the handle of his pistol.

"The whole world knows that bin Laden never betrayed anyone," Junior says, ashing his cigarette. "And here in Culiacán, we respect that!" he exclaims. Apparently the cameraman found this funny, because Junior points at him and says, "Are you laughing?"

"No, no, not at all," says the cameraman.

"Tell me why are you laughing," Junior says, drawing the pistol with a jittery hand.

On camera, Beriain sits perfectly still.

"I was not laughing, really," the cameraman says. The fear in his voice is evident.

"Look, I'm going to tell you something," says Junior, charging the pistol and getting to his feet. "I engraved his face, and I'm going to see him in Hell."

He ends the interview by flinging his beer can into a swimming pool and shooting it. The partygoers on the dance floor barely react.

Past estimates of the number of hitmen – a rough translation of the Spanish sicario, a word that connotes an assassin and mercenary and member of an underground sect – in the Sinaloa Cartel have ranged from as low as 150 to as high as 150,000. Beriain learns that true number is 15,000, at least according to the commander of a paramilitary base where hitmen in camouflage uniforms mill around in skull masks and bizarre Halloween heads, toting a grim array of military-grade weaponry. This is an interesting revelation, but the intimate conversations Beriain has with individual hitmen are what set the documentary apart, and his meticulousness as an interviewer more than compensates for the somewhat unnecessary voiceovers and dramatic music.

"After killing so many people," says a hitman in a pink polo shirt who is said to have killed hundreds of people, "it turns into a vice. If you don't kill, you feel anxious to kill someone." It's hard to say whether the glint in the man's bulging eyes is depravity or immense psychic pain.

"Are you ready to lose your life?" Beriain asks another hitman, who is sitting under a tree wearing a black ski mask. "Of course," the hitman says. "What do you feel for your boss?" Beriain asks, referring to El Chapo. "Affection," the hitman says. "Loyalty."

In another scene in a warehouse Beriain notices a chair sitting on a sheet of plastic with a pair of handcuffs, rubber gloves, and a number of wicked-looking tools lying around. He asks a scowling hitman what they're for. "We use them to give certain punishments to people who don't observe our norms," the hitman says. 

Why the Sinaloa Cartel would allow a journalist to witness these scenes is a question that pervades the viewing experience, and is never satisfactorily answered. But it's clear that the cartel controlled everything Beriain saw and dictated who he could talk to and what he could film. The interviewees all speak reverentially of El Chapo and deny that his arrest destabilized the cartel, projecting an image of continuity and strength that may be misleading. Across Mexico many top bosses fell in 2016, several cartels splintered, several consolidated, and the nationwide conflict between gangster factions and the federal military has reached a new peak of violence this year. At one point in the documentary Beriain learns from a newspaper that a squad of gunmen has attacked the palatial house of El Chapo's elderly mother. Not long after that, El Chapo's flamboyant sons were briefly kidnapped in a bizarre raid at an expensive Puerto Vallarta restaurant known for its all-white interior. Beriain admits at one point that he doesn't know which side of the split he's on at any given moment, but the people he interviews are largely unaffected. When a boss falls and his top lieutenants turn on one other, the massive workforce beneath them keeps on as before. The cartel is distributed and modular and adaptable. In that regard, the smuggling techniques that Beriain documents – overland by car, on foot through the Sonoran Desert, and by commercial airliner – suggest one reason Beriain might have been allowed to film: The constant, widespread flow of small amounts of contraband by a variety of means can't be stopped, even if the authorities understand the methods perfectly. Drugs seep through the border like water through fabric.

In one scene, for instance, Beriain introduces us to a well-dressed woman code-named Samantha. On the table is a 400-gram oblong rod of shrink-wrapped heroin, a hollow latex phallus, a condom and a jar of petroleum jelly. "My work consists in carrying heroin from here in Culiacán to Los Angeles," she says, "in my vagina."

Beriain follows Samantha through the airport at a distance. The heroin is worth fifty grand. She gets paid $4,000. The flight costs $400. She figures she would get 20 to 25 years in prison if she were caught. Mid-flight, Beriain gets a text message informing him that aside from Samantha, there are two other drug couriers aboard. The camera pans the faces of the sleeping passengers. There is no way to know which two they are.

In an industrial garage near Tijuana, six kilos of heroin are handed off to a skinny kid wearing a facemask and latex gloves. He wipes the packages with rubbing alcohol before packing them in compartments beneath the seats of his car and fumigating the interior with a chemical to thwart drug-sniffing dogs. Beriain asks what would happen if he lost the merchandise.

"You cannot lose it," the smuggler says simply. "My best friend was robbed before crossing. He … he ended badly."

"The cartel killed him?"

"Yes. He had the worst kind of death. Tortured. Burned. Shot."

"Is it worth it?"

"There are necessities. I have a family. I don't do this for fun."

The smuggler bows his head and prays to Saint Judas, the Virgin Mary and El Malverde, the mythical bandit of Sinaloa. With a weary sigh he slams the rear hatch and they set out for the U.S. border. It's four in the morning as they approach what appears to be the Calexico/Mexicali crossing. The line of cars inches forward under harsh floodlights and surveillance cameras. If caught, the smuggler would face a minimum sentence of ten years and a maximum of life in prison.

"Hola," says the American guard, and then in English, with a Minnesota accent: "Where you going now?"

"To San Diego."

A pause as the guard glances over the car.

"Okay," he says. "Thank you very much."

The drop-off is in Lakeside, California, where the smuggler gets paid $6,500, a relatively paltry fee on a load worth $700,000 retail.

"But our work in the United States wasn't done," Beriain says. At the Mexican border, the contraband flows both north and south. The cartels need weapons to fight each other and the Mexican military, but guns are illegal in Mexico, where there is no firearms manufacturing whatsoever. In America, by contrast, the weapons industry is a hugely profitable and politically untouchable big business. The industry manufactures over five millions guns per year, and there are stockpiles everywhere. Two thousand firearms are illegally exported from the United States to Mexico per day, fueling the country's catastrophic conflict as much as the billions of dollars of demand created by the miserable failure that is drug prohibition.

At dawn in a parking garage, an off-camera seller hands over a small arsenal of bubble-wrapped assault rifles and boxes of high-caliber military ammunition new from the factory. Thunder rolls and lightning flashes as the smuggler's car, laden with weapons, crosses the border with no questions asked, barely even rolling to a stop.

"Drugs go up," Beriain says, "guns come down."



A dialect coach demonstrates 12 different accents


Sammi Grant is a dialect coach and voiceover artist for television and theater. In this video, she demonstrates her expertise in speaking English with several different accents, including Irish, Scottish, German, the American midwestern accent, and the Transatlantic accent, an accent invented to sound both American and British simultaneously.

No, really. That’s not a real accent. It’s a now-abandoned affectation from the period that saw the rise of matinee idols and Hitchcock’s blonde bombshells. Talk like that today and be the butt of jokes (see Frasier). But in the ’30s and ’40s, there are almost no films in which the characters don’t speak with this faux-British elocution, a hybrid of Britain’s Received Pronunciation and standard American English as it exists today. It’s called Mid-Atlantic English (not to be confused with local accents of the Eastern seaboard), a name that describes a birthplace halfway between Britain and America. Learned in aristocratic finishing schools or taught for use in theater to the Bergmans and Hepburns who were carefully groomed in the studio system, it was class for the masses, doled out through motion pictures.

This short video has some more examples of the Transatlantic (or Mid-Atlantic) accent:



One New Yorker's Quest for the Perfect Amount of Noise


As soon as the door slams, I slide to the floor in a cross-legged position and hold my breath. The room in which I have just barricaded myself looks a bit like Matilda’s chokey; a single light bulb casts a sickly yellow glow about the room, its walls lined with triangle-shaped chunks of fiberglass straining against wire mesh. In 15 minutes I will leave this room for the cacophonous world of Manhattan. I should, theoretically, be appreciating this small respite for what it is. Even so, with every second, I feel as if I’m going deeper underwater.

I am sitting in an anechoic chamber, the only one in New York City. Nestled in the hip, angled building of The Cooper Union for the Advancement of Science and Art, the anechoic chamber is where acoustics students, headed by the aptly-named Melody Baglione, conduct research—it’s the equivalent of a zero-gravity chamber, only in this case, the variable is sound. The room is designed to be as noise-free as possible; its chunky walls completely absorb reflections of sound waves, and insulate the space within from all exterior sources of noise. While the chamber is not exactly silent, per se—at 20 decibels, the ambient noise level is quieter than a whisper, but twice as loud as a pin drop— it’s almost certainly the quietest space in New York.

And the silence, as they say, is deafening. Sitting in here, it’s as if someone has turned the volume up inside of my head. I helplessly observe my mind as thoughts careen across it, stop in the middle and, after a brief, flailing, Wile E. Coyote-esque interval, plummet into the abyss. Desperate for distraction, I check my phone, crack my knuckles, make tiny coughing sounds. Each tiny rupture of quiet attains such specialness, such texture, you can practically touch it. It takes tremendous effort to be silent: yet that’s what I’m here to be. In a futile, self-defeating moment, I try to force myself to un-tense. “Just relax!” I scold myself. I am in what you might call a state of withdrawal: and, like it must be when withdrawing from anything at first, sobriety is deeply uncomfortable.

What am I detoxing from? Noise. I live in the East Village, which is very noisy—illegally noisy. Last year, Jackie Le and Matthew Palmer, acoustics engineering students at Cooper Union, decided to investigate the noise levels of the area near their school for their senior project. Le and Palmer went to various apartments around this neighborhood and, using a decibel meter, calculated the average level of volume coming in through the open windows of multiple apartments, and compared them with “safe” levels defined by New York City’s recently-revised noise code. “In every instance, we found the noise coming into these people’s apartments was above code,” Le says.


I can vouch for this. I’ve spent this whole year telling anyone who will listen that the hundreds of nights I’ve spent trying to fall asleep in my apartment constitute a Sisyphean Hell of endurance: the iterating, irritating garbage trucks, the construction that starts at promptly 6 a.m. and continues into evening. I make a lot of noise about the noise, and I’m not the only one. Noise is the single greatest quality-of-life complaint New Yorkers have (we lodged 18,000 phone complaints with the Department of Environmental Protection last July alone). We all love to hate the noise. And yet sitting in silence, I do not feel as if I’ve found an escape from pain: I have simply traded it for a new variety. Shockingly, I realize I want to trade back.

In this city of complainers, who could admit to loving something so easy to complain about? Lewis Black, a comedian, couches his praise of noise in a cynical one-liner, noting dryly, “The reason I live in New York City is because it’s the loudest city on the planet Earth. It’s so loud I never have to listen to any of the shit that’s going on in my own head.”

Black might be on to something. Noise can cause us distress and pain, but it can also help us think, perceive, remember, and be more creative. It turns out that it’s even necessary for our physiological and mental functioning. If it’s a drug, then it’s a performance drug. And New York is full of addicts.

Though it’s counterintuitive, numerous experiments have demonstrated that the addition of noise can actually improve signal detection. This phenomenon, known as stochastic resonance, was first developed to describe the periodic nature of glacial climate change, and is thought to occur across many nonlinear dynamic systems—including the human brain.

A team led by Keiichi Kitajo, a researcher at RIKEN Brain Institute, first demonstrated this effect in vision. Noise coming into subjects’ left eyes increased their ability to detect a signal with their right. Since then, stochastic resonance has been observed at every level of the nervous system, from sensory receptors to neuronal networks. Researchers at the Wyss Institute at Harvard University have used vibrating insoles to add tactile noise to the soles of feet, improving tactile perception and balance. Auditory noise has been observed to enhance detection of an accompanying signal—which is known as “auditory stochastic resonance.”1

Auditory noise can heighten our other senses, too. Researchers have found that an “optimal amount” can make your fingers more sensitive to sensations, improve your ability to see contrast and even correct your posture (by enhancing “proprioceptive,” or positioning, signals).2 This is known as “cross-modal” stochastic resonance: Noise is a rising tide, lifting all signals. Cross-modal stochastic resonance can also improve memory, and higher-level cognitive processes such as judgment.3,4 It may even make us more ingenious.


In 2012, Ravi Mehta and a team of researchers at the University of British Columbia proposed that noise and the brain have a Goldilocksian relationship: Too much or too little impairs thought, but at moderate levels, when it’s “just right,” it makes us more creative. They subjected this hypothesis, as in the parable, to several tests.

First, the researchers created a noise smoothie from a blend of ambient sounds, including people talking in a cafeteria, vehicle traffic, and distant construction noise. Then they piped it into a room full of undergraduates at various volumes: low (50 db), moderate (70 db), and high (85 db). The students took a Remote Associates Test (or RAT) designed to measure creativity at each volume. The questions consisted of three or four stimulus word prompts, followed by a guess at what the target word is: For example, faced with “shelf,” “read,” and “end,” the correct response was “book.” The number of correct answers reflects the ability to think creatively and associatively.

Students working in the moderate noise level condition generated the most correct answers, an average of 1.5 more correct answers than in the low noise conditions, and 1.9 more than in the high noise condition. When participants were then asked to come up with creative ideas for a new mattress, low and medium noise participants both generated more ideas than high noise participants, but moderate noise participants’ ideas were rated as “more creative” than both low and high noise by a team of independent judges. The researchers made sure to control for the moderate increase of cortisol that can sometimes lead to increased productivity in the presence of noise.

What was going on, the researchers believed, was that the moderate level of noise induced processing disfluency, defined as the loss of “the subjective experience of ease or speed in processing information.” Processing disfluency is basically a measure of mental distance: When it exists, fixating, or thinking closely, becomes just difficult enough that the mind doesn’t clench around the particularities of an idea. Instead, it has a looser attitude, and can shift perspectives. The right amount of processing disfluency spurs creative thinking—thoughts with a perfect “creative” distance from their subject. Too much processing disfluency, however, and coherence is lost. This isn’t textbook stochastic resonance, because creativity can’t be boiled down to signal detection. But, just like textbook cases, an optimum exists, after which benefits fall away parabolically.

The optimal level for creative thinking, Mehta found, is 70 db—about the level of a crowded café.5 Or traffic in midtown Manhattan.

Dannielle Tegeder is a contemporary artist best known for her abstract blueprints of Utopian cities, and has had her work presented in over 100 gallery exhibitions. She has several pieces in the permanent collections of New York’s Museum of Modern Art, Chicago’s Museum of Contemporary Art, and The Weatherspoon Museum of Art in Greensboro. Her studio? It’s in the heart of Times Square.

“I don’t think you could get a louder and crazier place in all of New York City,” she says. However, when she attempts to just “get away from it all” she finds herself unable to work. “I go to a lot of artist residencies upstate, and I become completely immobilized when I’m there,” she says. “I find that it really brings my creativity level down, being in a bucolic environment. It’s just too calm.”

The focus of Tegeder’s work is city living, and “how pathways are made through pedestrians and traffic and subways.” New York’s noise is “exciting and stimulating,” she says, and it gives her something. “Weirdly, being in such a loud productive space in the middle of the city connects into my thinking and being. There’s a certain agitation level I get, because I commute during rush hour, but there’s still something I get from that. I don’t want to say it wakes you up, but it pushes you to think in a different way for sure.”

William Parker, a composer and double bassist living in the East Village and a prominent member of the New York City avant-garde jazz scene, believes the city’s noises are an integral part of his music. Growing up in the Bronx, he says, “I really started getting into listening to, not what I was focused on, but to peripheral sounds.” His relationship with noise deepened as his career took off. “Later on, as I started studying music, [peripheral noise] became very important to me.” (See Rooted in Sound.)

For Tegeder, Parker, and many of the city’s residents, the noise of New York is a comfort, an inspiration, even a declaration of identity. The city’s own noise code tips its hat to this fact, saying on its first page that it aims to balance health needs with New York’s “important reputation ... as a vibrant, world-class city that never sleeps.” The city’s noise pulls on the psyche, a jangling byproduct and energetic hum that feeds on itself.

As the city gets louder, it is also getting more creative. The Center for an Urban Future 2013 report “Creative New York” found that, “while traditional economic drivers like finance and legal services have stagnated in recent years, several creative industries have been among the fastest growing segments of the city’s economy.” Employment in film and television production increased 53 percent over the past decade. Architecture (33 percent growth), performing arts (26 percent), advertising (24 percent), visual arts (24 percent), and applied design (17 percent) all outpaced the city’s overall employment growth, which was 12 percent. Today the city is home to 14,145 creative businesses and nonprofits, up 18 percent from a decade ago; there are more Etsy sellers than yellow cab drivers.

All of which doesn’t mean that every creative type will benefit from the Big Apple’s din. The “just-right” level of noise will differ from one person to the next, following their levels of internal noise: the symphony (or cacophony) created by the interaction of organs, electrophysiological signals between skeletal muscles, and conversations between our neurons.6 We know that internal and external noises combine and compete in our bodies and minds, and this balance can tilt one way or the other. How can you tell if you’ll like it? It helps if you have ADHD.


People with ADHD often have low neural dopamine levels. This leads them to have memory and focus issues, and to seek excess external stimulus. Noise can have minor medicative effects for them; Göran Söderland and a team of researchers found that subjects with ADHD performed better on cognitive tasks under 81 db level of ambient noise (about the loudness of a garbage disposal), while control groups’ performance declined.7 “Participants with low dopamine levels (ADHD) require more noise for optimal cognitive performance compared to controls,” they write.

Again, there was a Goldilocks effect. “Strong, salient, irrelevant stimuli may easily disrupt concentration, leading to attentional problems, whereas an impoverished environment may be compensated for by hyperactivity. However, moderate levels of arousing stimuli may be beneficial for cognitive performance.” Söderland and his team believed this boils down to the behavior of dopamine, the neurochemical intimately tied into the mechanism of ADHD. By enhancing the difference between internal noise and external stimulation, it helps us separate meaningful external cues from meaningless internal neurological and chemical rumblings. In other words, our signal-to-noise ratio goes up.

In people with ADHD, dopamine is usually too low—until environmental stimuli come in. Then, dopamine goes haywire, flooding the synaptic cleft and drowning out the signal, then getting drawn back into the system, creating yet more noise and confusion. A moderate level of constant noise acts like the classic stochastic resonant system: A signal gets enhanced and stays there. Dopamine rises on the tide of noise, but gently, without flooding. ADHD medication changes this balance, and patients often report increased sensitivity to external noise.

New York may just be the perfect ADHD pill, dialing up the noise in the heads of individuals with low internal noise, and helping them dial down the chaos they feel. Highly creative people often suffer from ADHD, and I’ve been diagnosed with it. Ever since moving to New York, I’ve experienced an unprecedented balance of productivity and peace—it’s as if simply existing in this bustling landscape has helped me streamline and make sense of my own inner world.

Here’s the thing about noise, though— you miss it when it’s gone.

To live in New York means to get habituated to the noise of everyday life here. Researchers have seen the effect happen over time. As a neighborhood becomes more homogenous, and its residents sync their noise patterns, noise complaints tend to go down. This may explain why, controlling for other factors, gentrifying areas of the city display higher levels of noise complaints. City residents stop consciously recognizing noise as novel, and it becomes background, even if their bodies don’t always recognize it as such.

Arline Bronzaft, an environmental psychologist with a specialty in the effects of noise pollution, cautions that noise tolerance can be harmful. “People use the phrase, ‘I get used to it—I walk the streets and I get used to the noise,’ ” she told The New York Times in 2013. “It means you’ve adapted to the noise. When you’re dealing with something, you’re using energy to cope with the situation. Guess what? That’s wear and tear on your body. So when you hear someone say, ‘I’m dealing with it,’ I say, ‘Yes, but at what cost?’ ” All those jackhammers, sirens, and late-night garbage trucks are working their way into your teeth, your ears, your brain, and your heart. Studies conducted in industrial settings have long demonstrated relationships between noise exposure and cardiovascular disorders. In 2006, Dr. Hildegaard Niemann found that people exposed to neighborhood noise lived shorter lives, thanks to increased risk for heart disease, depression, migraines, and respiratory system problems.8

That’s not to mention hearing loss, which occurs after prolonged exposure to all sounds louder than 85 db, says Charles Shamoon, an assistant counsel at the DEP who co-authored the city’s revised noise code in 2007. These effects are further compounded by stress.

Sometimes, though, nothing is as loud as quiet. When he moved home after graduation, Matthew Palmer, the Cooper Union student whose project ratted out the entire East Village, found that he missed the noise that used to jar him, and had trouble sleeping. “Every time I come to visit my family, it feels way too quiet.” The experience is a common one for people who move to the suburbs from the city. They find it difficult to sleep; their brains, in the absence of the moderate external noise they’re used to, ramp up their internal noise, and become increasingly sensitive to the smallest sounds.

After the city, a quiet environment can produce “sensory underload,” a lowered level of stimulation that brings with it a higher level of noisy neural activity. In a miniature everyday version of this effect, people trying to get to sleep at night often report internal overstimulation, or “racing thoughts”—in lieu of external stimuli, it’s as if the noise in their head is dialed up, and thoughts all crowd in at once. Physicians often prescribe white noise—a moderate, constant external noise source—to help calm racing thoughts. If you’re a New Yorker who finds herself in the suburbs, white noise could be your nicotine patch. Or, if you’re back in the city, just crack open a window.

In the essay collection Goodbye to All That, a group of ex-New Yorkers reflect on what initially drew them to the city—“the crush of subway crowds, the streets filled with manic energy, and the certainty that this is the only place on Earth where one can become exactly who she is meant to be”—and the subsequent need to leave.9 Once they do leave, each of them finds a residual ache, a withdrawal effect. Nowhere is this nostalgia more palpable than in Joan Didion’s description of her walk home from work, which she presents to us as a baffling bouquet of sensory noise: “I could taste the peach and feel the soft air blowing from a subway grating on my legs and I could smell lilac and garbage and expensive perfume and I knew that it would cost something sooner or later ...”

As for myself, I found the brief respite from the city in the anechoic chamber invigorating. As I leave the chamber, I feel my vision sharpen. Every part of my body feels more alert, but still relaxed. I tell Baglione that I feel like I’ve just been in a spa. “I could do this every day,” I say. She looks at me like I’m crazy. Going cold turkey was hard for a bit. But then the fuzz inside of my brain smoothed out. After about 5 minutes, I relaxed. I realized I could hear my blood circulating throughout my body. My thoughts slowed, reaching solid conclusions.

Outside Cooper Union, the city noise assaults me once more; there’s a drill placed uncannily close to the school’s entrance, and a man is jackhammering away. I reach into my backpack pocket and pull out Baglione’s parting gift: a pair of earplugs. I spend the rest of the day walking around with them stuffed inside my ears. I only take them out once—when I see a man playing a painted piano in Washington Square Park, his fingers pounding the keys.

Susie Neilson is an Editorial Fellow at Nautilus.


References

1. Ward, L.M., MacLean, S.E., & Kirschner, A. Stochastic resonance modulates neural synchronization within and between cortical sources. PLoS One 5, e14371 (2010).

2. Lugo, E., Doti, R., & Faubert, J. Ubiquitous crossmodal stochastic resonance in humans: Auditory noise facilitates tactile, visual, and proprioceptive sensations. PLoS One 3, e2860 (2008).

3. Herweg, N.A. & Bunzeck, N. Differential effects of white noise in cognitive and perceptual tasks. Frontiers in Psychology 6, 1639 (2015).

4. Usher, M. & Feingold, M. Stochastic resonance in the speed of memory retrieval. Biological Cybernetics 83, L11-L16 (2000).

5. Mehta, R., Zhu, R., & Cheema, A. Is noise always bad? Exploring the effects of ambient noise on creative cognition. Journal of Consumer Research 39, 784-799 (2012).

6. Faisal, A.A., Selen, L.P.J., & Wolpert, D.M. Noise in the nervous system. Nature Reviews Neuroscience 9, 292-303 (2008).

7. Söderlund, G., Sikström, S., & Smart, A. Listen to the noise: Noise is beneficial for cognitive performance in ADHD. Journal of Child Psychology and Psychiatry 48, 840- 847 (2007).

8. Baughman, B. Noise pollution hard on heart as well as ears. NPR (2011).

9. Botton, S. (Ed.) Goodbye to All That. Hachette Book Group, New York, NY (2013).

This article was originally published in our “Noise” issue in July, 2016.


A Gigantic Nazi City That Was Never Built
