Speaker: Gavin Payne
An Introduction to Adoption Management - and why it's become relevant to the tech industry
For many years, organisations have widely recognised managing change as a critical, yet rarely mentioned, leadership skill. So why the recent interest from the tech industry in adoption management, the lesser-known sibling of change management?
Part of the answer comes from organisations' growing need to get all of their staff using new technology services to transform how they work. A messaging system only becomes useful when enough of the people you know also use it. The second part of the answer is the need for tech vendors selling pay-per-use cloud services to make sure their end users actually start, and then keep, using those services – otherwise they get no revenue.
Adoption management can be considered the art and science of helping people become more effective by using something new. It could be a mobile app to book gym classes, a security service to stop data leakage or a new collaboration system to share files. Whatever an organisation's need for adoption management, there are standard ways to help people manage change – or, in this case, adopt new tools.
This session explains why the tech industry needs to formally manage adoption, and the academic models that explore why people adopt change at different speeds and in different ways. It then provides guided examples of how the speaker has led adoption management projects that increased the use of new services while also making sure users became more efficient than they were before. Although these examples focus on adopting Microsoft 365 services, attendees can apply them in a broad range of situations.
Speaker: Richard Griffiths, Database Technical Lead, Confused.com
DevOps for databases is a daunting task, but we have been on that journey and now manual releases are the exception. How did we get there?
We were your ordinary database development team with aspirations of automated deployment. Enviously peering over at the other developers passing their deployment packets to the automated mechanisms of the release team, while we still got up at stupid o’clock to press F5, crossing our fingers we were awake enough to connect to the right environment, sweat often running down our furrowed brows…
Code was integrated into a central TFS repository, primarily for version control, using Visual Studio database projects with CI builds for validation, but there was no explicit link between that code and deployment. Synchronisation was therefore manual, and environment discrepancies were commonplace. The fear of a state-based deployment destroying or rebuilding our data was a huge obstacle to overcome…
A glimmer of light shone through as extra publishing options in Visual Studio database projects gave us hope that we could start deploying from source, with the safety net of preventing data loss...
Great progress was being made technically, and a huge cultural shift was taking place within the database team. An enormous amount of effort went into automation, and with the backing of a great IT culture we ploughed on. The reliability of repeatable code deployment through our environments hugely reduced test environment maintenance for the database team; the benefits were already becoming apparent.
A member of the database team was then embedded in the Configuration & Release team, where new skills were picked up. This quickly led to another shift: using release pipelines instead of adapted CI builds for deployment. Communication and partnership between the database, QA, release management and Ops teams grew considerably during this period, with a great culture of change allowing the new methodologies to be accepted with open arms.
In just a few months we’d gone from being an isolated team deploying changes manually, to using a SaaS VSTS solution with an automated deployment partnership across teams.
The hard work had been done. Ideas and skills bounced around between teams to remove any need for manual deploys. What else could VSTS (now Azure DevOps) offer us?
We used Azure DevOps task groups to piece together common build and release tasks. Libraries allowed us to centralise configuration details such as usernames and passwords. Azure DevOps dashboards for the database team improved work management, and any issues with our pipelines were immediately apparent.
SSIS and database deployments were explicitly linked so that dependent releases were bound together and nobody had to think about release ordering. This expanded our thinking from individual databases to service deployment: releasing packages of working pieces of architecture. We delved into the wonderful world of PowerShell and automated SQL Agent execution in our pipelines to ensure everything still worked post-deployment.
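A post-deployment check of this kind could be sketched as follows – a minimal PowerShell example, assuming the SqlServer module and a reachable SQL Server instance; the server and job names are hypothetical, not the speaker's actual setup:

```powershell
# Sketch: start a SQL Agent job after a deployment and wait for its
# outcome, failing the pipeline if the job fails. Names are illustrative.
Import-Module SqlServer

$server  = "SQLBUILD01"          # assumed instance name
$jobName = "PostDeploySmokeTest" # assumed SQL Agent job name

# Query for the most recent completed run (step_id 0 is the job outcome row)
$historyQuery = @"
SELECT TOP (1) h.instance_id, h.run_status
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE j.name = N'$jobName' AND h.step_id = 0
ORDER BY h.instance_id DESC;
"@

# Remember the latest run before we start, so we can spot the new one
$before = (Invoke-Sqlcmd -ServerInstance $server -Query $historyQuery).instance_id

# Kick off the job via msdb's documented stored procedure
Invoke-Sqlcmd -ServerInstance $server `
    -Query "EXEC msdb.dbo.sp_start_job @job_name = N'$jobName';"

# Poll until a newer outcome row appears
do {
    Start-Sleep -Seconds 15
    $latest = Invoke-Sqlcmd -ServerInstance $server -Query $historyQuery
} while (-not $latest -or $latest.instance_id -eq $before)

# run_status 1 means succeeded; anything else fails the release
if ($latest.run_status -ne 1) {
    throw "Job '$jobName' did not succeed (status $($latest.run_status))."
}
```

Throwing from the script is enough to fail the Azure DevOps pipeline step, which blocks the rest of the release.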
Key Vault is king. Who would have thought a secure, central configuration store could be so powerful? Key Vault became the source for our Azure DevOps libraries and for our ARM template deployments, allowing us to deploy infrastructure as well as databases in single release pipelines.
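Wired into a pipeline, that pattern might look something like this – an illustrative Azure Pipelines YAML fragment, not the speaker's actual configuration; the variable group, service connection and parameter names are assumptions:

```yaml
# Sketch: a variable group backed by Azure Key Vault feeds secrets into
# an ARM template deployment. All names below are illustrative.
variables:
- group: ProdSecrets   # variable group linked to a Key Vault in Azure DevOps

steps:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'prod-service-connection'
    subscriptionId: '$(SubscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-data'
    location: 'UK South'
    templateLocation: 'Linked artifact'
    csmFile: 'templates/sql-server.json'
    overrideParameters: >-
      -administratorLogin $(SqlAdminUser)
      -administratorLoginPassword $(SqlAdminPassword)
```

Because the variable group is linked to Key Vault, rotating a secret in the vault updates every pipeline that consumes it, with nothing stored in the pipeline definition itself.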
Pester testing is perfect. Combining Key Vault details with Pester, the PowerShell testing framework, allowed us to build our own repeatable, generic infrastructure and database testing solution.
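A minimal sketch of the idea, assuming Pester v5 and the Az PowerShell modules; the vault, resource group and server names are hypothetical, not the real solution:

```powershell
# Sketch: infrastructure tests that read expected values from Key Vault
# and check the deployed Azure resources. Names are illustrative.
Describe "Post-deployment infrastructure checks" {
    BeforeAll {
        $vault = "my-release-kv"   # assumed Key Vault name
        # Pull the expected database name from Key Vault
        $script:expectedDb = Get-AzKeyVaultSecret -VaultName $vault `
            -Name "DatabaseName" -AsPlainText
    }

    It "deployed the expected database" {
        $db = Get-AzSqlDatabase -ResourceGroupName "rg-data" `
                -ServerName "sql-prod" |
              Where-Object DatabaseName -eq $script:expectedDb
        $db | Should -Not -BeNullOrEmpty
    }

    It "left the server firewall rules in place" {
        $rules = Get-AzSqlServerFirewallRule -ResourceGroupName "rg-data" `
                    -ServerName "sql-prod"
        $rules | Should -Not -BeNullOrEmpty
    }
}
```

Running `Invoke-Pester` as a release step turns these checks into a repeatable gate: the same test file validates every environment the pipeline deploys to.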
To prove all of this is achievable, in case the talking isn't enough, we'll finish with a demo going from no infrastructure at all to a fully provisioned PaaS database architecture in Azure, followed by making a change through automated CI builds and release pipelines in Azure DevOps.