
Live Event Transformers

HOW WE GOT CHUCKED INTO THE DEEP END WITH ONLY 3 WEEKS TO VIRTUALIZE A 23,000 PERSON DIGITAL MARKETING EVENT.

 

This is the first in a series of articles, written by Wrecking Ball SVP & General Manager Bob Donlon, on the subject of shaping the future of virtual events and conferences.

In March 2020, the “excrement had hit the air conditioning.” My previous employer, and current client, Adobe announced that due to COVID-19 its annual digital marketing conference, Adobe SUMMIT, had been cancelled. It was to be a 23,000-person live event in Las Vegas, and, poof, it was gone. The next thing we knew, the C-suite announced a decision to pivot the whole thing to an online virtual event, with five of the keynotes and 125 of the breakout sessions that had been planned for the in-person event, and to keep the existing timeframe intact. That meant my team and I at Wrecking Ball, along with our colleagues within Adobe, had three weeks to launch a virtual event of that scale when nobody involved was allowed to leave their homes. I could hear people screaming in San Jose, which is forty-five miles south of here.

In what now feels like the blink of an eye, video gear was sourced and shipped by the Adobe Studio team to the homes of the C-level keynote presenters and those delivering the breakout sessions. Remote sessions were set up to coach them on the technical and performance aspects of recording their content. On-the-fly workflows were developed to facilitate panel discussions (no disrespect to Zoom, but that just wasn’t gonna cut it for this). In parallel, my team took on the design, development, and deployment of a set of new components for the custom-built video publishing platform we had originally created during the Adobe TV days (and which has been in continuous use by the company ever since). We had to integrate everything within a custom front end being built by a different team, and factor in a complex set of business requirements covering discovery, delivery, analytics, and follow-up.

It was the first event of this magnitude to be pulled off at the onset of the pandemic. We reached hundreds of thousands of unique viewers in the first week alone. People engaged with the content across the board, at a level an order of magnitude greater than would have been the case in Vegas. KPIs were blown out of the water. Even though I wish we’d had the time to virtualize other aspects of the conference, such as the exhibit floor and socializing/networking (maybe virtual craps tables?), it was a pretty damn good place to start.

This hadn’t been my first rodeo. In 2010, I was asked by the leadership team at Adobe to help create the first virtual event in the company’s history. The task was to present the launch event for Creative Suite 5 on Adobe TV, an online video platform that I had conceived of and built along with my cohorts at Adobe and a relatively new company called Wrecking Ball (where I work today). This would be the first time there would be no in-person launch event, as Adobe TV had grown to the point where it reached a far greater audience than anything that could be achieved face-to-face.

To be completely honest, there was one more impetus behind this endeavor . . . a massive failure that had taken place during a previous in-person launch event in 2009. I had played a role in that one as a keynote presenter; thankfully, my own portion went off without a hitch (if it had not, please feel free to visualize my ass being kicked down West Broadway by my boss at the time, seen in the picture below).

Adobe SVP John Loiacono (left) and me (right) delivering part of the keynote at an Adobe Creative Suite launch event in 2009. (image credit: DV Magazine)

Earlier in the keynote, my colleague Greg Rewis was onstage demonstrating the new version of Dreamweaver when all of a sudden the screens went dark. In front of a live audience of several hundred, as well as lord knows how many others tuning in via a live stream, the whole thing ground to a halt. Greg tried to fill the dead air by cracking jokes, and an “unscheduled intermission” was abruptly announced. “Be right back, folks!”

Thirty minutes later the issue was fixed, but by then we had lost most of the webcast audience, and the five hundred or so people in the room were well into the free hard liquor on offer behind the seating area. To be honest, what happened next is somewhat of a blur, but I’m pretty sure that when I stepped onto the stage an empty bottle of Jack Daniels came flying at me from the seats.

The point I’m trying to make here is that there were multiple factors behind the decision to pivot the 2010 Creative Suite 5 launch event from live to virtual. For sure, we had built a successful, proven platform, Adobe TV, with which to pull it off. For ABSOLUTELY sure, there was no way we could have anything resembling a repeat performance of that unscheduled intermission in 2009 (incidentally, the culprit behind that one turned out to be a cable that overloaded and fried somewhere on the A/V side of the house).

The latter was solved by recording the keynotes in advance, but that was the easy part, as the event would involve much more than just a simulated live stream of a pre-recorded keynote. We needed to build awareness and buzz around it, provide an easy means for attendees to register (and thus capture the highly prized leads coveted by our sales and marketing organization), produce and deliver the event itself, create an instant post-event on-ramp to our vast library of product demos, and provide a quick and easy path for attendees to download trials of and purchase the software.

Remember, this was ten years ago, and nothing like this had been done 100% online at that scale before. There were months of research, development, and testing to pull together the pieces that made it happen (read a very brief case study here if it interests you). Today we take a lot of this for granted, but at the time it was a landmark event and a major success: we achieved the highest amount of revenue ever booked for Adobe in a single day.

So here we are, chucked head-first, right back into the swimming pool of virtual events and conferences. This is on for real. But I tell you what: the timing couldn’t be better, and the possibilities for transforming this space are endless. Every day I’m having conversations with current and potential clients in organizations of all shapes and sizes about this transformation; ideas fly around faster than an empty Jack Daniels bottle sailing towards my head.

There’s one thing we all agree on at this point: there is no “if” anymore, there is only “how”. How do we define success in this space? Where should we “place our bets”?

First we need to understand the motivations of attendees. What do they hope to gain, and what are they willing to sacrifice in terms of time and money to achieve those gains? On what basis do they deem a live event a success or failure? In my next article, I’ll dive into those motivations and success factors . . . until then, please enjoy this very brief “scheduled intermission”.


Thanks for reading! The Wrecking Ball team is always happy to have informed conversations on the topics of virtual events/conferences, video production/platforms, or digital marketing in general.

Self Compiling Go Docker Container

This blog post was written by Todd Rafferty, Senior Software Architect at Wrecking Ball Studio + Labs. Todd has over 15 years of experience in software engineering and has been working on bringing Docker and Go into our standard Wrecker toolbox.

Imagine a self-contained development environment that could detect a file change on my file system, kill the existing Go binary, rebuild it, and then launch a new process.

INTRODUCTION

Setting up a Docker container that self-compiles my Go source upon changes, within a local development environment, helped me and my colleagues iterate faster. I am a remote engineer working with a mix of other disciplines on my team who are new to the language. The goal was to make a reproducible development environment that was extremely productive for both ends of the spectrum. Enter Docker. With docker and docker-compose, I can build, tear down, and recreate the entire development environment with a single command, but could it be smarter? Could it be used to automate the tedious? Could it be reproduced across other developer machines?
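
To make “a single command” concrete, here is a minimal sketch of that cycle, assuming a docker-compose.yml along the lines of the one sketched later in this post:

# Build (or rebuild) the images described in docker-compose.yml.
docker-compose build
# Bring the whole development environment up.
docker-compose up
# Tear it all down again when you're done.
docker-compose down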

After learning Docker and trying various approaches, I came across Reflex, which wraps fsnotify. My approach was to get things working outside of the container first to understand how everything works, then move the pieces into a container and get it working there. Outside of the container, I could get reflex to listen for changes to any Go files within a directory. However, within a container I ran into limitations, and I determined that it would be more performant to listen for changes to a single file than to entire sub-directories of potential matches.
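
For reference, the outside-the-container experiment looks roughly like this. This is a minimal sketch, assuming reflex is installed from github.com/cespare/reflex, and the watch pattern and build command are placeholders to adjust for your own project:

# Install reflex into $GOPATH/bin.
go get github.com/cespare/reflex
# Watch every .go file under the current directory; on change, rebuild and relaunch the binary as a service.
reflex -sr '\.go$' -- sh -c 'go build -o /tmp/app . && /tmp/app'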

So, what we'll need as I guide you through this:

  • Some Linux knowledge. Familiarity with bash scripting.
  • Docker experience. Familiarity with the docker and docker-compose tools.
  • New docker beta. This is important because it uses the native virtualization engine of the operating system instead of relying on VirtualBox. Just trust me. It's faster this way.
  • Understanding of Go. Familiarity with Go environment, compiling, and coding.
  • golang on dockerhub
  • Reflex which uses fsnotify internally.
  • Please note, I’ve tested this on OS X. I haven’t tested this on Linux / Windows, sorry!

Please note, everything you're about to read is for local development environments. This isn't meant to be a deployment strategy or for production usage.

BASE CONTAINER SETUP

First, we need a Go environment within a Docker container. Fortunately, there's one already available to us on Docker Hub. For this post, we'll be using the Alpine distro because it's super small, but there is a Debian-based one available as well, and no changes are needed to switch between distros. Within the Docker container, the $GOPATH is `/go`, which means the Go environment sits right at the root of the container's filesystem.

We need more on this container, though, because while it has the Go environment on it, it doesn't have everything we need to watch for file changes within our project. This is where reflex comes in. Reflex is a small program written in Go that notices changes on our local file system and kicks off a shell script within the Docker container for us.

Base Dockerfile:


# Pull the golang version.
FROM golang:1.7-alpine
ENV GOBINARIES /go/bin
# Fix the DNS issue, this happens at raff's house.
RUN echo 'hosts: files [NOTFOUND=return] dns' >> /etc/nsswitch.conf
# Setup reflex env
ENV REFLEXURL http://s3.amazonaws.com/wbm-raff/bin/reflex
ENV REFLEXSHA dee8f77fac8c873c709117df6ebe4467fc9f57ed3339105d308f787e9b94059c
# Install reflex
WORKDIR $GOBINARIES
RUN wget -q "$REFLEXURL" -O reflex &&\
    echo "$REFLEXSHA reflex" | sha256sum -c &&\
    chmod +x /go/bin/reflex

Here is a brief explanation of what's going on in this Dockerfile. We're pulling `golang:1.7-alpine`, downloading a pre-built version of reflex, verifying its SHA-256 checksum, and marking it executable. We’re avoiding building reflex on the container itself to make sure we have a reproducible environment and to avoid `go get` issues.

This is a pretty good base image. Each project we work on is probably going to be different in terms of path and configuration, so my recommendation is to keep this base image lean and reuse it across different project configurations. The above has already been provided for you on Docker Hub.
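
If you'd rather build and publish your own copy of the base image, the standard Docker workflow applies. A minimal sketch, where the image name and tag are placeholders for your own Docker Hub account:

# Build the base image from the Dockerfile above and tag it.
docker build -t yourname/go-reflex:1.7 .
# Push the tagged image to your own Docker Hub account.
docker push yourname/go-reflex:1.7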

BUILDING ON TOP OF THE BASE IMAGE

We did a lot of simplification above, and that base image has been built, tagged, and pushed to Docker Hub. The above Dockerfile is documented in case you want to make modifications to the base image and put it within your own Docker Hub environment. Next, here's the project-specific Dockerfile that builds on top of it:


FROM wbsl/go:1.7
# APP SPECIFIC ENV
ENV BUILDPATH /go/src/github.com/WreckingBallStudioLabs/SelfCompilingExample
ENV TOOLS /go/_tools
ENV PORT 8080
# DOCKER / APP PORT
EXPOSE $PORT
# Make directories and add files as needed
RUN mkdir -p $TOOLS
ADD build.sh $TOOLS
ADD reflex.conf $TOOLS
RUN chmod +x $TOOLS/build.sh
# Execute reflex.
WORKDIR $BUILDPATH
CMD ["reflex","-c","/go/_tools/reflex.conf"]

Breaking down the above Dockerfile: we’re pulling the base image created above with `FROM wbsl/go:1.7`, setting app-specific environment variables, exposing port 8080, creating a `/go/_tools` directory, and then adding our `build.sh` and `reflex.conf` to that directory. So, let's pause here for a second. This entire environment depends on reflex kicking off a build script for us.

Here's the content of `build.sh`:


#!/bin/sh
set -e
echo "[build.sh:building binary]"
# Compile the project into /servicebin and clean /tmp to keep the container small.
cd $BUILDPATH && go build -o /servicebin && rm -rf /tmp/*
echo "[build.sh:launching binary]"
# Run the freshly built binary; reflex kills it and re-runs this script on the next change.
/servicebin

`build.sh` changes directory into the `$BUILDPATH` defined in the environment/Dockerfile, then runs `go build -o /servicebin` to compile a fresh binary (replacing the previous one, if there is one). It cleans up the `/tmp` directory afterwards to keep the container size down, and finally it executes the binary.

Let's take a quick look at `reflex.conf` file that reflex is going to use as a configuration.


-sr '\.build$' -- sh -c '/go/_tools/build.sh'

Reflex is going to run `build.sh` as a service: whenever a file ending in `.build` changes, it restarts the service by running the script again. We're very close to starting this up. We're just missing a sample Go file to modify.

Basic `main.go` example:


package main

import (
        "fmt"
        "log"
        "net/http"
        "os"
)

func handler(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello World!")
}

func main() {
        // Get the port from the OS ENV.
        port := ":" + os.Getenv("PORT")
        http.HandleFunc("/", handler)
        log.Printf("\nApplication now listening on %v\n", port)
        // Exit with an error if the server fails to start (e.g. the port is already taken).
        log.Fatal(http.ListenAndServe(port, nil))
}

So, we're done setting up the necessary scripts and configurations for the development environment within the container. There’s only one thing left to do, and that’s to create a post-save hook outside of this container.

POST SAVE HOOK

Our editor needs to have a post-save hook, and most editors have one: Sublime Text has sublime-hooks, and Vim users can use an autocommand that runs on save. I personally use the Atom editor with a plugin called on-save. The on-save plugin requires me to have a file in the root of the project named `.on-save.json` with the following content:


[
  {
    "srcDir": ".",
    "destDir": ".",
    "files": "**/*.go",
    "command": "echo $(date) - ${srcFile} > .build"
  }
]

So, `srcDir` / `destDir` I pretty much ignore and set to the current directory. `files` tells it to listen for save changes made to `*.go` files. If a `*.go` file is changed, it kicks off a shell command:


echo $(date) - ${srcFile} > .build

This writes the current date and the name of the changed file (e.g. `Wed Aug 17 13:35:20 EDT 2016 - main.go`) into a file named `.build`.

Something worth noting at this point: if you'd rather manually control the rebuilding and relaunching of the Go binary, there's nothing in this process stopping you from deciding yourself when the container rebuilds everything internally (one manual option is sketched below). Perhaps you’re OK with bringing down the environment and bringing it back up to rebuild. Find what works best for you.
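
For instance, a minimal sketch of a manual trigger, assuming the project directory is mounted into the container (as in the docker-compose sketch below), is simply to write to the watched file yourself:

# Writing anything to .build makes reflex inside the container re-run build.sh.
echo "manual rebuild $(date)" > .build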

BRINGING UP THE ENVIRONMENT

I have a sample `docker-compose.yml` in the project that will get you up and running pretty quickly. Again, I want to be as close to a real-world scenario as possible, and that means I may have an API server, a database server, memcache, and so on. Docker Compose allows us to describe an environment and bring it up quickly.
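
The exact file lives in the project, but a minimal sketch of what it might look like is below. The build context, volume path, and port mapping are assumptions to adjust for your own project; the volume mount is what lets reflex inside the container see the `.build` file written on the host:

app:
  build: .
  ports:
    - "8080:8080"
  volumes:
    - .:/go/src/github.com/WreckingBallStudioLabs/SelfCompilingExample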

With a terminal open and the working directory set to the root of the project, let’s launch the environment by typing `docker-compose up`. Here’s an animated GIF showing what we should be seeing.

Animated Gif Displaying Docker-Compose up

The terminal is on the right-hand side as the environment comes up. Within the environment, reflex runs `build.sh`, which builds the Go binary and launches it. On the left, a change is made within Atom. The post-save hook kicks in when the file is saved, writing to `.build`. Back on the right, reflex detects the change to `.build` and kicks off `build.sh` again, which replaces the previous Go binary with a fresh build and relaunches it.
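
From there, a quick sanity check from the host (assuming the 8080 port mapping in the compose sketch above) should return the handler's response:

$ curl http://localhost:8080/
Hello World!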

We're ready to build some awesome stuff now. 🙂

CONCLUSION

I believe maintaining a reproducible local developer environment across team members is critical. Keeping that environment up to date as features and fixes become available is hard, especially on a team with multiple disciplines, and a Docker container that builds the environment itself can be an effective solution that saves your team time. In some cases, developers on the team might not have the expertise to update their local environment properly, or may need to try a version of the code and then roll it back. Finding a solution to this problem is important, especially as the diversity of disciplines on the team increases over time.

Project notes:

I'd like to thank: