Videology was recently featured in a technology case study on the Amazon Web Services blog, highlighting our innovative work in managing advertising data processing in support of the Videology Platform. The case study covers work that, for many advertising and media clients, happens “behind the scenes.” But it is noteworthy for our entire audience because it shows how innovation happens at every layer of an ad tech company. We firmly believe that results are driven through every step of the user experience, and we are constantly looking for ways to make our technology better. The stronger the foundation, the better the client experience.
We caught up with three members of the Videology Engineering team – Joseph Julian (Sr. Director of Engineering), Paul Frederiksen (Principal DevOps Engineer), and Dave Ortiz (Sr. Software Engineer) – to discuss how this change is impacting Videology clients, and learn why they were asked to present the case at the Amazon Web Services re:Invent conference.
Tell us in a few sentences about the new Case Study published by Amazon Web Services.
Joseph: The case study discusses our migration to a system architecture that delivers improved scalability, reliability, and cost optimization. To achieve this, we leveraged new storage types released by Amazon Web Services (AWS) earlier in 2016.
Why was this a noteworthy case?
Dave: The storage solution we migrated to was not yet supported by our Hadoop vendor. We tested the migration in parallel with them, provided feedback on our experiences, and were among the first customers to make the move. The storage types we are using were announced shortly before our migration, and AWS was interested in sharing our use case with other customers.
Tell us about your presentation at AWS re:Invent conference.
Dave: The conference in general was amazing. For me, the best presentations exposed information we could use to further improve a variety of the systems and solutions we rely on. It gave us the chance to share our story with the industry, which validated that something we worked on was worth showing off.
Paul: It was a good opportunity to share our use case with other customers. It allowed us to collaborate with AWS on the story, and discuss future opportunities for working with them.
How has this work with Amazon Web Services benefited our customers?
Paul: This provides us with a more reliable, scalable, and predictable solution. Since implementation, we’ve experienced fewer disruptions and realized faster delivery of our data.
Dave: The system is more powerful and requires fewer resources to handle the workload. This allows for future improvements through use of technologies previously unavailable to us due to system resource constraints. These technologies have potential to enhance our feature set and further improve our ability to deliver within our SLAs.
How does your team continuously work to find ways to increase time efficiency and cost efficiency for our customers?
Dave: With Big Data, we are always looking for ways to reduce processing time, because doing so allows us to deliver reporting to our clients faster and feed data back into our optimization systems sooner. We also constantly focus on eliminating data discrepancies – precision is very important to us, so we always strive for perfection. Finally, faster and cleaner processing helps us integrate more efficiently with our clients and third parties.
Paul: From a DevOps perspective, we focus on simplifying and automating as much as we can to provide reliable, scalable, and predictable systems and services. Doing so allows our software teams to ship updates quickly and often. We don’t want to be a bottleneck to delivering new features.