Our world of technology is rapidly changing. We can compute more, faster. We can analyze incredibly large datasets efficiently and quickly. We can rapidly build applications for devices that communicate with each other and the greater world in real-time.
This year's GOTO Chicago conference is all about What Works: Which technologies and methodologies you need to know now, what the future of software will bring, how to better build more inclusive and collaborative teams, and where to start with all of this learning!
Cloud Native means building apps designed specifically to leverage cloud computing, often defined as container-packaged, dynamically managed and microservices-oriented. This architectural pattern allows systems to be self-healing, auto-scaling and highly available.
Organizations can radically reduce costs by leveraging the efficiency of cloud computing through Cloud Native technologies. But which problems will adopting Cloud Native solutions actually solve? Where do we start and how can we best strategize for a successful Cloud Native journey?
Automation has improved productivity across entire sectors. Software has driven much of this automation, but many workflows still require decisions by humans.
The promise of machine learning is to automate the decision-making process by training algorithms, based on empirical evidence. That promise is becoming very real and tangible for developers who are now able to leverage massive amounts of data with cloud computing power via learning libraries like TensorFlow and frameworks like MXNet.
How can today's engineers take advantage of modern learning methods? What are the main ideas and pitfalls when trying to automate decisions? How can organizations harness the power of machine learning to power their business?
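The core of that decision-automation loop fits in a few lines. Below is a toy, pure-Python sketch (not any particular library's API) that learns a yes/no decision rule from labeled examples via logistic regression; the data and names are invented for illustration.

```python
import math

def train_decision_rule(xs, ys, epochs=2000, lr=0.1):
    """Fit a one-feature logistic rule sigmoid(w*x + b) to labeled examples."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability of "yes"
            w -= lr * (p - y) * x                     # gradient step on the log-loss
            b -= lr * (p - y)
    return w, b

def decide(w, b, x):
    """The automated decision: 1 ("flag") if the model says so, else 0."""
    return 1 if w * x + b >= 0 else 0
```

Trained on, say, transaction amounts labeled 0 (normal) and 1 (flag), the rule learns its boundary from the evidence rather than from a hand-coded threshold.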
Microservices promise faster development, deployments, scaling and all the goodies you always wanted but never had. It’s all about outcomes, and the way your organization is structured has a tremendous impact on those outcomes. It’s easy to say “Conway’s Law” and then move swiftly on, but that’s not enough. Yes, a core characteristic of organizations successfully running microservices is that teams are organized around business capabilities, but there is so much more to discuss: How do we define a microservice? What does a microservices architecture require? How do we maintain or even increase our current level of security when moving to a fine-grained distributed architecture?
But microservices themselves are the easy part. The really difficult choices revolve around everything that surrounds the microservices systems as they are designed, built, run, managed, evolved, stressed and even retired in production. We must consider how to manage and optimize the processes, practices, people and technologies when migrating from a monolithic system to a Microservice Architecture.
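Questions like these get easier with a concrete artifact in front of us. Here is a minimal sketch, using only Python's standard library, of a single-capability service (the "pricing" capability and its catalogue are invented) that also exposes the /health endpoint an orchestrator would probe when managing it in production:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingService(BaseHTTPRequestHandler):
    """A tiny service owning one business capability: pricing."""
    PRICES = {"widget": 9.99, "gadget": 24.50}  # hypothetical catalogue

    def do_GET(self):
        if self.path == "/health":               # liveness probe for the orchestrator
            self._reply(200, {"status": "ok"})
        elif self.path.startswith("/price/"):
            item = self.path.rsplit("/", 1)[-1]
            if item in self.PRICES:
                self._reply(200, {"item": item, "price": self.PRICES[item]})
            else:
                self._reply(404, {"error": "unknown item"})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep request logging quiet
        pass

def serve(port=0):
    """Start the service on a background thread; return (bound port, server)."""
    server = HTTPServer(("127.0.0.1", port), PricingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1], server
```

The hard parts the paragraph above alludes to begin exactly here: once dozens of services like this one are deployed, discovered, secured and retired independently, the surrounding process matters far more than the code.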
Tools won't fix your broken culture. And neither will hiring "a DevOps".
No matter how technical the job is, we still have to interact with other humans. Even after our Agile or Lean implementation or radical DevOps initiative is running, we have to be able to work effectively with people from different disciplines and backgrounds if we are to create truly high-performing teams.
We’re witnessing a shift from hierarchical ‘command and control’ management systems to flatter systems with more individual autonomy. Yet despite their benefits, these systems present new and unique challenges. We need to understand the importance of inclusion, diversity, psychology and all facets of working together.
Choosing a programming language is one of the most crucial decisions when developing software - the choice can influence the way you and your team think about your problem domain and how you model it.
As developers, we need to be aware of the languages topping the hype curve and focus on the production-ready ones that provide real functionality. We also need to understand the exciting updates to older languages like Java and how ones like C++ are still incredibly important in a newer era.
You’re using containers and CI/CD pipelines, so you’re done, right? Wrong!
DevOps promises to deliver better software faster with shorter development cycles, increased deployment frequency, and more dependable releases. The Three Ways of DevOps - systems thinking, amplifying feedback loops and creating a culture of continual experimentation and learning - provide a solid framework, but DevOps is more than just tools and techniques. You can’t simply buy it or adopt it, and without a significant culture shift, you can’t just hire it either.
If you build it, you run it - but which skills and tools do you actually need to make the transition and run DevOps successfully? And how do we incorporate emerging ideas like Cloud Native, Serverless and Chaos Engineering as the next steps to enable more rapid iteration and better cooperation?
Serverless will revolutionize the way we write backends.
As a natural next step from the cloud's "Not on my machine" mantra, serverless applications offer greater cost savings than bare-metal solutions through improved optimization of infrastructure resources. But how do we actually build serverless apps and run them successfully in production?
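A minimal sketch of the unit of deployment in such an app, assuming an AWS-Lambda-style handler signature (the event shape here is invented): the platform owns provisioning and scaling, and our code shrinks to a stateless function from event to response.

```python
import json

def handler(event, context=None):
    """A stateless function-as-a-service entry point.

    The platform, not this code, handles servers, scaling and billing;
    the function only maps an incoming event to a response.
    """
    name = (event or {}).get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Running it in production is where the open questions start: cold starts, observability, and testing a system whose runtime you never see.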
50 years ago, quantum computing was just a theory. Today, quantum programming is getting close to becoming the new reality for software developers.
Quantum computers have the potential to disrupt how we fundamentally store, process and utilize data, and could provide significant breakthroughs in the optimization of complex systems, artificial intelligence and many other areas. Universities around the world are investing heavily in quantum computing research. Companies like Google, IBM, Microsoft and Rigetti Computing are making it possible for the rest of us to actually leverage quantum processors in the cloud.
So, how can we, as developers, get started with quantum programming? And what kinds of problems will quantum computers actually solve?
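One way to start is to notice that the core ideas of quantum programming (state vectors, gates, and measurement probabilities) can be simulated classically in a few lines of plain Python, with no quantum SDK assumed:

```python
import math

# A qubit as a 2-amplitude state vector [a, b] over the basis states |0> and |1>.
KET_0 = [1.0, 0.0]

def hadamard(state):
    """Apply the Hadamard gate: it sends |0> to the superposition (|0> + |1>)/sqrt(2)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: the chance of measuring |0> or |1> is the squared amplitude."""
    return [amp * amp for amp in state]
```

A real quantum processor, of course, only lets us sample measurement outcomes at these probabilities; the amplitudes themselves stay hidden.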
On average, it takes a company 8 months to figure out it has been hacked. In a world where innovation and deployment are expected at an ever-increasing pace, security is often neglected. Security requires time, and that time is often not prioritized, which poses a real challenge when new vulnerabilities are discovered and exposed every day.
As developers, how do we build inherently secure and maintainable code and infrastructure to protect our data and identities? How do we equip ourselves with tools to withstand intrusive and adversarial attacks and prepare for unforeseen security risks?
Everything nowadays seems to be “smart” and connected but are we prepared for the Internet of Things to be the new normal?
IoT no longer just means controlling a device from your smartphone. IoT equals ecosystems – some isolated, some connected, and some ready to be connected in the near future. As more devices communicate with cloud-based systems, our world should be getting smarter, but more connected devices also create new, unforeseen risks, with security being the most prevalent.
We know that agility means working together, with customer-focus and short feedback loops. We also know the methodologies to choose between and that there are cultural and personal issues related to making this work. But often, when a company introduces agility, they forget to support the developers in their day-to-day job. How do they actually implement CD, pair programming, testing and architecture in agile development?
The next generation of Agile may not be called Agile at all - it will just be implicit that agility is part of software development. So, what do we need to get there? What can we do now to move beyond the existing methods and give our agility a boost?
"Chaos engineering involves running thoughtful, planned experiments which teach us how our systems behave in the face of failure. These experiments start with a hypothesis about how things will behave, involve measuring the impact at each step, and result in better understanding of how the system behaves under duress. From this, we can decide what actions to take to strengthen or mitigate the issue." - Kolton Andrus, Founder and CEO at Gremlin Inc.
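The experiment loop Andrus describes can be sketched directly: state a hypothesis, inject a failure, measure the impact. The failure rate, retry policy and 98% threshold below are invented for illustration.

```python
import random

def flaky_dependency(fail_rate, rng):
    """Simulated downstream call; it fails at the injected rate."""
    if rng.random() < fail_rate:
        raise ConnectionError("injected failure")
    return "ok"

def call_with_retries(fail_rate, rng, retries=3):
    """The behavior under test: a client that retries transient failures."""
    for _ in range(retries + 1):
        try:
            return flaky_dependency(fail_rate, rng)
        except ConnectionError:
            continue
    return "error"

def run_experiment(fail_rate, requests=10_000, seed=42):
    """Hypothesis: with retries, success rate stays above 98% at this fail rate.

    Returns the measured success rate so we can confirm or refute it.
    """
    rng = random.Random(seed)
    ok = sum(call_with_retries(fail_rate, rng) == "ok" for _ in range(requests))
    return ok / requests
```

The same shape scales up from this toy: replace the simulated dependency with a real one, inject the failure in a controlled environment, and let the measurement, not intuition, decide whether the system held up.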
Organizations struggle to scale critical applications every day. Users expect reliability, high availability, and extraordinarily rich user experiences across a wide variety of device and network conditions. But with increased traffic volume and data demands, these applications can become slow, inconsistent or just not work.
Scaling isn’t just about handling more users – it’s about managing risk and ensuring availability. What technical decisions do we need to make to ensure that we can grow from tens of users to hundreds and thousands and provide the same expected quality? What current and emerging architectures, practices and solutions can we leverage to achieve predictable performance and scalability?
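As one example of a building block for predictable behavior under load, here is a token-bucket limiter sketch (the capacity and refill numbers are invented); it sheds excess requests deliberately instead of letting them slow everyone down.

```python
class TokenBucket:
    """A token-bucket limiter: a common way to shed excess load predictably.

    `capacity` caps how large a burst is admitted; `refill_rate` sets the
    sustained request rate (tokens added per second).
    """
    def __init__(self, capacity, refill_rate):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        """Return True if a request arriving at time `now` (seconds) may proceed."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Rejecting the request beyond the budget, quickly and cheaply, is often what keeps the other thousand requests fast.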
“Continuous Delivery is the ability to get changes of all types - including new features, configuration changes, bug fixes and experiments - into production, or into the hands of users, safely and quickly in a sustainable way. Our goal is to make deployments - whether of a large-scale distributed system, a complex production environment, an embedded system, or an app - predictable, routine affairs that can be performed on demand."
– Jez Humble, co-author of the Jolt Award winning book Continuous Delivery
"Continuous Delivery is rooted in the ideas of the scientific method. We are trying to allow developers, teams and organisations to work in a more experimental way. That means making small changes, observing the results and adapting to what we learn. So we optimise our development process for fast, efficient, high-quality feedback. This allows us to steer our software projects more effectively and so create higher-quality software faster. Adopt an experimental approach to making change, think in terms of “how can I measure that to understand if it works, or not”. Apply this thinking to EVERYTHING that you do, org-structure, team org, technical things, operations, requirements - Everything."
– Dave Farley, co-author of the Jolt Award winning book Continuous Delivery
In many ways, Continuous Delivery extends Agile development principles to production and operations but implementing Continuous Delivery is hard. So many tools claim to “implement Continuous Delivery” but in reality, it requires both technology and organizational improvements.
How can we embrace an evolutionary architecture paradigm and a more iterative approach to improving the design of our enterprise systems? How can we evolve into higher-performing organizations that are always striving to get better? And what patterns should we adopt to increase throughput, stability and quality as we deploy software more frequently?
IoT, Serverless, Augmented Reality, Machine Learning - there is one platform uniting and connecting it all: mobile devices.
Everything is possible now on mobile devices, and everything is expected to happen on them. Businesses rely on them and we depend on them. But what are the actual cutting-edge mobile technologies and tools that we need to deliver better apps and satisfy ever-faster time-to-market demands?
Software Architecture is as important as ever. Newer distributed architectures like event-driven, microservices and serverless are increasing in popularity and adoption, but much of the world still runs smoothly because of solid application architectures. These cohesively coupled monoliths often solve problems in the simplest way and might be the right choice for some organizations. But it is still critical for architects to understand how to take existing applications and migrate them to microservices or other service-based architectures.
Microservices provide valuable benefits to solving real-world problems by enabling continuous delivery and deployment of large, complex applications. Event-driven architectures can help organizations strategically optimize new digital business moments - so much so that event-driven was included in Gartner’s Top 10 Strategic Technology Trends for 2018.
Understanding these different technologies, trade-offs and practices can directly impact an organization’s long-term success. Today’s software architects must design, implement and deploy solutions that are both effective and versatile for our changing world of software. Which technologies and practices can help or hinder when dealing with legacy enterprise architectures? How can architects create hybrid architectures that take advantage of both application and distributed architectures?