#70 in Computers & technology books

Reddit mentions of Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler))

Sentiment score: 24
Reddit mentions: 34

We found 34 Reddit mentions of Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler)). Here are the top ones.

Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler))
Buying options
View on Amazon.com
    Features:
  • Addison-Wesley Professional
Specs:
Height: 9.4 inches
Length: 7.35 inches
Number of items: 1
Weight: 2.12 pounds
Width: 1.3 inches


Found 34 comments on Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler)):

u/unborracho · 14 pointsr/devops

The Jez Humble / David Farley book on Continuous Delivery is a must-read for teams that deliver solutions in an automated way. It's more oriented toward software developers than operations/IT, but really a must-read for both types of folks if we're all going to come together as "DevOps".

edit: Amazon Link: https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

u/ibsulon · 12 pointsr/webdev

I'm available for $140/hr. ;)

Honestly, there is more than one right way, and it depends on your particular architecture. However, there are a few obvious ones, and I apologize if I'm insulting your intelligence:

  1. There must be an automated process to build artifacts.
  2. Deployment-specific details must live outside the source code (i.e. server locations, database details). These details can be picked up from properties files or the environment, depending on your system.
  3. Every build must be built by a CI server. A great free one is Jenkins. The build must pass tests every time. It must be deployed to a development or staging server automatically.
  4. Ideally, the same process should take you to production, but it depends on the environment. Either way, it must be completely automated. I know places that have many servers. They do the automatic deployment to 10% of servers, watch what happens, then finish the deployment, but it's all automated.

    From there, it's details.

    http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 is a bit repetitive, but it is "the book" for figuring out the right way.

    As for the time to implement it, it's a bit of a virtuous cycle. The more you automate, the more you have time to automate.
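Point 2 in the list above - keeping deployment-specific details out of the source - can be sketched in a few lines of Python. The variable names and defaults here are my own invention for illustration, not anything from the comment:

```python
import os

# Deployment-specific details live outside the codebase: pick them up
# from the environment, with safe defaults for local development.
# All names here (APP_DB_HOST, etc.) are made up for this sketch.
DB_HOST = os.environ.get("APP_DB_HOST", "localhost")
DB_PORT = int(os.environ.get("APP_DB_PORT", "5432"))

def connection_string() -> str:
    """Build a database DSN from environment-supplied settings."""
    return f"postgresql://{DB_HOST}:{DB_PORT}/app"

print(connection_string())
```

The same code then runs unchanged in staging and production, with only the environment differing.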
u/FetchKFF · 11 pointsr/devops

You've got a lot of foundational knowledge to pick up if you're going to successfully complete this. And fair warning: even for people with excellent domain knowledge, the task ahead of you can take months. My suggestion would be to read the following resources and figure out what extra knowledge you need from there.

Jez's book on Continuous Delivery - http://www.amazon.com/gp/product/0321601912/

Paper on CI with Docker - https://www.docker.com/sites/default/files/UseCase/RA_CI%20with%20Docker_08.25.2015.pdf

Pattern of CD with New Relic - https://blog.newrelic.com/2015/03/06/blazemeter-continuous-delivery/

CI/CD in the web development space - https://css-tricks.com/continuous-integration-continuous-deployment/

u/bshacklett · 11 pointsr/git

It seems as though you're focused on a particular method for your deployment strategy right now and it may be useful to take a step back and see if your requirements have already been met by recommended patterns.

What you're describing in the above is an integration workflow, and there are many well tested strategies already out there for integrating code (depending on your language/framework, this work may already have been done for you). Most importantly, these strategies rarely suggest cyclical actions like your FTP^(*) transfer from html back to "Remote1".

Ideally, you want your flow to look something like the following:
git repository -> integration steps -> build -> build artifacts -> artifact repository -> deployment tool -> deployed code

Note how this is a unidirectional flow, with nothing being pushed back into the Git repo. You may need to pull artifacts in from multiple build pipelines depending upon your requirements, but if you see the flow being reversed (committing back to git), you should carefully consider why you're doing so and ensure that there's not a better solution^(†). Additionally, while you may very well not need some of the previously listed steps, you should understand what each of them is and make an informed decision about whether or not it should be included.

I would strongly recommend spending some time diving in to Continuous Integration and Continuous Delivery/Deployment (CI/CD) and the patterns used within those methodologies, keeping in mind that CI/CD does not start with automation, but with understanding your code and requirements. One book I would strongly suggest is Continuous Delivery by Jez Humble and David Farley (https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912). It's starting to show its age a bit, but the fundamental ideas are still there and there's a lot of value to be gained from reading it.

Focus your reading on the what and the why before you move on to the how. There are tons of articles out there on how to build fast and efficient CI/CD pipelines, but if you don't have a solid understanding of why the pipeline should be there in the first place and what value it adds, it's easy to see articles of this nature as pointless exercises or end up building something that doesn't really fit your requirements.


* This is a warning sign, by the way. If you find yourself thinking that FTP is the solution, consider it a red flag; step back and re-evaluate.

† There are some common patterns which result in committing code back to a git repository, but they solve rare use cases and tend to generate a significant amount of debate. In particular, some say that generated code should be committed back to a repository. As an alternative, I would suggest considering generated code to be a build artifact and storing either it or resulting binaries in an artifact repository rather than trying to commit back to git.

u/deadbunny · 7 pointsr/linuxadmin

Disclaimer, I have zero Ruby experience, I also suck at explaining things.

Cucumber/Aruba appear to be testing frameworks. I have no idea how they work specifically, but the general gist of test-driven development is that you write a test for what the code should do, then you write the code to pass the test (so it functions as specified). In short: developers write code, run tests, release code.

So assuming they want this formalised by you, I'm guessing they want some kind of automated testing/continuous integration server. This would essentially mean setting up a Jenkins server with a bunch of projects (one per code base) that run the developer-written tests.

Example:

No CI:

You have a git repo for "project-awesome". Developers (you'll very likely find) branch off of master into a feature branch, work on their stuff, then merge into master when done. This then either gets packaged up or deployed from git (either manually or by a config management system).

With CI (very basic):

master is the release branch. Developers branch off of a development branch, do their stuff, and push back to the development branch when done. From here, Jenkins sees the branch has new commits, so it runs the tests on the code automatically. If they pass, it merges them into master, which then gets deployed; if not, it fires off an email to whoever committed the code with the failed test results.

The aim here is that testing is completely automated, which means that even if I'm working on feature X and my changes break feature Y, it will (in theory) be caught by Jenkins. That means there's a reduced risk of breaking production by pushing bad code.
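The gate described above - run the tests, merge on green, email on red - can be sketched in Python. The commands and messages are stand-ins for illustration, not an actual Jenkins configuration:

```python
import subprocess
import sys

def run_tests() -> bool:
    """Run the project's test suite; the command is a stand-in
    for whatever the real suite is (e.g. pytest, rake test)."""
    result = subprocess.run([sys.executable, "-c", "raise SystemExit(0)"])
    return result.returncode == 0

def ci_gate(branch: str, committer: str) -> str:
    """What a CI server does when new commits land on a development branch."""
    if run_tests():
        return f"merge {branch} into master and deploy"
    return f"email {committer} with the failed test results"

print(ci_gate("development", "alice"))
```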

I just found this page, which should hopefully cover some of the general idea better than I can (and it has pictures!). Personally I'm a big fan of CI and use it for everything that makes sense (code, scripts, config management states). With that said, it's a pretty daunting subject if you have no idea what you're doing (like me, most of the time!). You'll need to do a fair bit of reading, and I would heartily recommend this book.

u/mr_chip · 7 pointsr/sysadmin

Some more:

The Art of Scalability - Well, the first 2/3 of it.

Scalability Rules - Very good, very short.

Continuous Delivery (I'd argue this is the single most important book in the software industry in the last 10 years)

The Little Redis Book - Free download!

The Little MongoDB Book - Free download!

The Varnish Book - Available as a free PDF if you fill out a marketing form.

u/EgoistHedonist · 5 pointsr/devops

Ouch! Sounds like you don't have any part of a continuous delivery pipeline ready. I would start from there before doing anything else. This book covers the whole concept nicely: https://www.amazon.com/gp/aw/d/0321601912

We were in exactly the same situation a year ago when I started. Production had last been deployed months earlier because it was so painful. Now we deploy over ten times per day and the whole 15-minute process is automated. Deployments have become a non-issue and there's no need for anybody to be on-call or lose sleep over them. :) It has also made a huge difference to our productivity and allows us to crush the competition via rapid innovation.

u/ephos · 4 pointsr/PowerShell

It stands for Continuous Integration / Continuous Delivery. To plug /u/KevMar's own blog, he did a good write-up on setting up a CI/CD pipeline for PowerShell modules. I also threw in 3 other links to some of my favorite blog posts on it.

u/blackertai · 4 pointsr/softwaretesting

Agile Testing: A Practical Guide

Continuous Delivery

Clean Code

Obviously, after this you can expand more in the direction of your particular product needs. I've been doing a lot of reading around CI/CD process, and the overall trend towards "DevOps". But you might want to focus on security or performance testing, and that will have its own path.

u/bluescores · 4 pointsr/devops

Hello and welcome to the club! To answer some of your questions:

Books. The aforementioned The Phoenix Project and The DevOps Handbook are both great resources that will help you understand what devops aims to do. The handbook has a lot of great "what am I doing (and why am I doing it)" explanations and practical implementation advice. I would add Continuous Delivery to the list as well. Because it lacks the Goldratt-inspired language of The DevOps Handbook, it's a little more to-the-point regarding "what am I doing (and why am I doing it)" imo, and easier to read and understand in one pass if you're looking to dive in quickly.

We don't know what your company produces or the exact scope of resources you manage, but AWS is a big, robust ecosystem, and it's a great place to get started with devops. Don't be too worried about limiting yourself. After all, AWS is the biggest cloud platform provider in the world right now; it's a really big pigeonhole to land in. Any patterns you apply to AWS can very likely be translated to other cloud providers.

> Lastly, what are your thoughts on devops vs software engineering?

Software engineering is the "dev" in "devops". Unfortunately a lot of companies hire "devops engineers", but really they expect very little "dev" and a lot of "ops". Developing apps and infrastructure together is great way to optimize your stack and deployments. It's the quintessential devops move. Take advantage!

u/shaziro · 4 pointsr/SoftwareEngineering

For testing, I liked this one: https://www.amazon.com/Growing-Object-Oriented-Software-Guided-Tests/dp/0321503627

For version control, continuous integration, continuous delivery, this was a good read: https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

There are many extreme programming books that briefly talk about pair programming. If you want a book specifically on pair programming only then there is this: https://www.amazon.com/Pair-Programming-Illuminated-Laurie-Williams/dp/0201745763

There are thousands of books out there for learning various topics of software engineering.

u/YuleTideCamel · 3 pointsr/learnprogramming

Read:

u/slowfly1st · 3 pointsr/learnprogramming

> Should we start learning how to build for Android, iOS, or some cross-platform tool? What are trade offs for each?

For instance https://ionicframework.com/

But honestly, as long as you don't need to develop native, as /u/Xen0_n mentioned, I'd go with a progressive web app. You write it once and it runs in all modern browsers. You also have access to e.g. GPS, can send push notifications, etc. But make sure, a PWA provides all the technical features you need! (Proof of Concept! -> I need to decide on the tech stack by the end of this month)

It's also important to consider your team's abilities. If everyone is a Python developer, don't use C#. If everyone knows Angular, React is probably the wrong decision. Check whether enough knowledge and experience is present in the team - the people on a team can usually give quite good feedback about technologies (complexity, learning curve, whether it's fun to work with).


>What are common components of an app's architecture that we will likely have to think about? I know we'll need a front end and a back end with a database, but I'm guessing we'll need to consider things like communications with the server storing the database? -How do apps link these components together/let them talk to each other?

Usually Multitier architecture. E.g. the front end communicates with a REST-api, rest API with a business layer, business layer with a persistency layer. What you use (programming language and back end) will determine how the communication will work. With Java and a relational database it will be most likely be JDBC with the given driver of the DBMS.

But also think about the cloud - this has some impact on the software architecture (aka cloud readiness).


>What are common mistakes when making early design decisions that cost you down the line in efficiency and maintainability?

From my experience:

  • In general, violating basic object-oriented design principles (SOLID, cohesion, coupling, ...), e.g. passing around objects from the OR-mapper directly to the client instead of designing APIs, or bidirectional dependencies between packages.
  • Not applying good software development and delivery practices (software delivery pipeline, high test coverage, high-quality tests, code reviews, releasing and deploying regularly, decoupled architecture...). You should never be afraid to change your software.


    > What should our development process look like? Simultaneous front end and back end development? Back end before front end?

    Don't split the team into front end and back end if you can avoid it. Only if the team is getting too large to be effective should a split be considered - having two teams will usually end up in finger pointing. It's better to code by feature, and to split a feature into smaller tasks (work in small batches). Think about an MVP: a small batch which already generates value to the customer and also generates feedback. It doesn't need to be feedback from production; it can be from a customer.

    How you write and deliver software - from requirements engineering, UX testing, actual coding and whatnot to deployment into production - is a really large subject. And there's no 'one size fits all'-approach - every environment is different. I'm a disciple of agile software development: The Manifesto for Agile Software Development and Continuous Delivery (and: Accelerate).

    Important is, that you guys always improve the process (as in continuous improvement). Not only within the team, but also and especially with the customer.


    Another thing: Don't forget security. The outcome of a security audit can be painfully expensive.
u/oinkyboinky7 · 2 pointsr/devops

No problem.



> 1.) Is there any option to avoid Windows? I didn't really grow up on it and Linux is much more familiar to me. I can install PowerShell as a snap I think, but I want to avoid Windows as much as possible.


I'm the opposite: grew up on Windows and I avoid Linux haha.


That said, there certainly is the option to simply ignore Windows (there are a lot of Linux only shops out there), however, I would not recommend avoiding it completely.


Most IT environments that you will end up in have a mix of Windows and Linux. You will be much more attractive to some employers if you can say that you know Windows basics and are comfortable navigating the platform, rather than saying that you are Linux only.


To add, a lot of enterprises run Windows only tools (ActiveDirectory, Team Foundation Server, etc.).

In order to interface with these tools, you'll need to know some Windows basics.


At the end of the day, keep in mind that when looking at an enterprise sized network, the operating systems are just one piece of the puzzle. You'll want to be able to step back, visualize the entire network, and not really care what OS is running on the nodes. You'll want to know about the protocols the servers use to communicate (http(s), smb, ftp, etc), how to automate processes regardless of platform, and some basic security concepts.


> 2.) What cloud experience should I pick first? I have a small budget of 13 euros for Udemy courses, so I can pick only one and I want to choose wisely.


Go with AWS 100%. They are the most popular at the moment and a lot (if not most) of what you learn will transfer to the other providers.


Try to study for the AWS Solutions Architect certification.


The practice exams here ( https://www.whizlabs.com/cloud-certification-training-courses/ ) are invaluable since they will explain why certain answers are correct/incorrect.


> 3.) Would learning Apache and PostgreSQL/MySQL be worth it?


At the end of the day, Apache is basically a web server and the others are databases.


For a DevOps role, perhaps look into automating the database component of an application deployment.


> 4.) Which should I pick first to learn about the concepts?


Look into CI/CD pipelines. CI = Continuous Integration and CD = Continuous Deployment.

Basically, these pipelines deploy an application to web servers when developers check in their code.

One has to build them using different tools. Have a read here ( https://semaphoreci.com/blog/cicd-pipeline ).

If that interests you, this is basically the Bible on CI/CD ( https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 ).


Any more questions feel free to ask.

u/Purplemarauder · 2 pointsr/devops

If you've not got a copy of Continuous Delivery by Humble and Farley, I'd really recommend it. It's chock full of best practices with the reasons as to why, what, etc.

https://www.amazon.co.uk/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

Sometimes it's easier to point to something that's published rather than have it seem like it's just your opinion I've found. :)

u/phuber · 2 pointsr/dotnet

If you are open to it, here are a few good reads to help you on your way. The legacy code book may pay dividends quicker given your situation.

Clean Code: https://www.amazon.com/dp/0132350882/ref=cm_sw_r_cp_apa_ruhyybGGV0C34

Refactoring: Improving the Design of Existing Code https://www.amazon.com/dp/0201485672/ref=cm_sw_r_cp_apa_gwhyyb1VRNSKK

Working Effectively with Legacy Code https://www.amazon.com/dp/0131177052/ref=cm_sw_r_cp_apa_0whyyb3Y604NJ

Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler)) https://www.amazon.com/dp/0321601912/ref=cm_sw_r_cp_apa_JxhyybA08ZQF8

u/tabolario · 2 pointsr/rails

Hi and sorry for the late reply! The first thing I'll have to ask is what environment are you deploying into, a manually configured virtual machine/bare-metal machine, Heroku, Ninefold? Each of these environments have different (sometimes vastly different) considerations when it comes to deployment of any application. In general though, here's some things that will apply that will apply to any good deployment process (some of what's below echoes /u/codekitten's reply):

  • Remove ALL credentials from your codebase: I can't stress this enough, and even for a simple project it's a good habit to get into early on. It's been enough of an issue that there are even dedicated tools to help people remove hard-coded credentials from their codebases. A good resource to explain both this, as well as the general concept of storing environment-specific configuration data outside of your codebase is this section of the Twelve-Factor App website. Personally, aside from things like tokens that Rails uses internally like Rails.application.config.secret_key_base, I will always use environment variables coupled with something like dotenv or direnv to also manage the configuration for my local development environment.
  • SSL and HSTS: IMHO there is no (good) excuse nowadays to serve a web application over HTTP. Once again, even for simple projects it's a good habit to get into and a good thing to learn. If you're hosting your application on Heroku, all Heroku application subdomains (i.e. rxsharp.herokuapp.com) will respond to HTTPS, but it's up to you to ensure your users will always use SSL. Rails has the force_ssl setting to do this automatically for you, which you should have turned on in all of your production and production-like environments, but you should also be using HSTS to ensure that your users always visit your site over SSL (force_ssl performs a permanent redirect to https://rxsharp.herokuapp.com but does not set the HSTS headers). The gem that I use most often to take care of setting these headers for me is secureheaders, which also helps you configure a number of other security headers like Content Security Policy headers.
  • Continuous Integration: Let me expand a bit on /u/codekitten's item for passing tests to say that you should have a system in place that will automatically run all of your tests each time you push to your repository and holds you accountable when things break. Continuous integration is a huge topic that I won't dig too much into here, so I'll just point you to two indispensable books on the subject: Continuous Integration: Improving Software Quality and Reducing Risk, and Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Conceptually, you will learn almost everything you need to get started and then some on the topic from those two books. Once you get your CI configuration in place, you will get in the wonderful habit of always making sure your build passes locally before you push to your repository. A good CI script will:
    • Run static analysis tools like RuboCop and Brakeman
    • Run all of the tests
    • Notify you when a build fails and when it gets fixed
  • Automate Everything: One of the most important things to learn about deployment early on is automation. Apart from initiating the deploy (and arguably even initiating the deploy itself), everything about your deployments should be automated to the fullest extent. There are several tools in the Rails world that most people use to accomplish this, most notably Capistrano and Mina. If you are using a platform-as-a-service like Heroku or Ninefold, see the documentation for one of those on how to automate various aspects of your deployment process.
  • Deployment Smoke Testing: In my experience working in the Rails world, it seems that not a lot of people automate their post-deployment verification, even though it's very easy to do! It can be as simple as having a post-deployment hook that uses curl to hit a status page on your site that returns the currently deployed revision, and rolls back the deploy if it receives an error. It can also be as complex as running a suite of RSpec examples that utilize something like Serverspec to assert the state of each one of your application servers (obviously this one doesn't work as easily in environments like Heroku). In the end, the important thing here is that you automate EVERYTHING when it comes to your deployments.
  • Database Migrations: First of all, don't forget to run them! If you're using something like Capistrano to script your deployments, the command to run a deployment that includes a database migration is cap production deploy:migrations, not cap production deploy. On Heroku, you need to run them manually after you deploy using something like heroku run rake db:migrate. One further topic here that I highly recommend you explore is that of zero-downtime migrations. A great introductory article on these is Rails Migrations with Zero Downtime over at Codeship.

    These things are all general items that belong near the top of any checklist for deployment (Rails or otherwise). Hope this helps!
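The smoke-test idea above - hit a status page, compare the deployed revision, roll back on error - might look roughly like this in Python (the URL scheme and response format are assumptions for the sketch):

```python
from urllib.request import urlopen

def smoke_check(status_url: str, expected_revision: str) -> bool:
    """Fetch a status page that returns the deployed revision and
    compare it with what we just shipped."""
    try:
        with urlopen(status_url, timeout=5) as resp:
            deployed = resp.read().decode().strip()
    except OSError:
        return False  # unreachable status page counts as a failed deploy
    return deployed == expected_revision

def post_deploy(status_url: str, revision: str) -> str:
    """Hook to run right after a deploy finishes."""
    return "deploy ok" if smoke_check(status_url, revision) else "rolling back"
```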
u/WanderingKazuma · 2 pointsr/softwaretesting

So the difference between 1 and 2 is basically two different modes of delivery. 2 is the traditional large version changes. You'll see this with Adobe CC products, and other large software projects with larger teams.

1 is what you will see with like Chrome, Firefox, and other projects where you might not be aware that an update was made, but still see bug fixes and new features as you use them. This is the continuous delivery model.

1 will require a lot of automation, and a very good CI/CD platform setup. There are plenty of resources around how to get that set up properly. You can always start with versions, and then move into a CI/CD continuous delivery model when your automated process is more fleshed out.

https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/ would be the resource for CD.

As for your other question, yes I have been working professionally as a QA team lead and/or Manager for almost 10 years now. My focus is really automation testing, but I have worked with many companies setting up QA and SDLC processes.

u/ferstandic · 2 pointsr/ADHD

I'm a software developer with about 5 years of experience, and I used to have the same sorts of problems, where I would over-commit to getting work done and under-deliver. To summarize: I now only commit to tasks that take 1-2 days or less at a time, and I make it very, very public what I'm working on in order to manage both my and my team's expectations. Here are the gritty details (ymmv, of course):


  1. I got my team to start using a ticketing system and explicitly define what we are working on, with explicit acceptance criteria for each ticket. That way you know where your finish line is. There are other huge benefits to this, but it's outside the scope of your personal workflow. This of course takes buy-in from your team, but at the very least start a board on Trello with "todo", "in progress", and "done" columns, try to keep the number of items "in progress" to a minimum, and work on them until they're finished. A cardinal sin here is to move something from "in progress" back to "todo". This thing you're setting up is called a kanban board.

  2. I break the work I do into 1- or 2-workday 'chunks' on our team board, so I don't lose interest or chase another issue before the work I'm doing gets finished. Keep in mind that, depending on how heinous your meeting schedule is, a workday may only be 4 (or fewer :[ ) hours long. An added bonus is that it's easier to express to your team what you're working on, and after practicing chunking up your work, you and they will reasonably be able to expect you to finish 2-3 tasks a week. There are always snags because writing software is hard, but in general smaller tasks will have a smaller amount of variability.

  3. As I'm coding, I practice test-driven development, which has the benefit of chunking up the work into 30-or-so-minute increments. While I'm making tickets for the work I do, I explicitly define the acceptance criteria on the ticket in the form of the tests I'm going to write as I'm coding (the BDD given-when-then form is useful for this). So the flow goes: write tests on ticket -> implement (failing) test -> implement code to make test pass -> refactor code (if necessary).

  4. This is a little extreme, but I've adopted a practice called 'the pomodoro technique' to keep me focused on performing 30-minute tasks. Basically you set a timer for 30 minutes, work that long, and when the timer elapses take a 5-minute break. After 5 or so 30-minute intervals, you take a 20-30 minute break. There's more to it, but you can read more here. Again, this is a little extreme and most people don't do things like this. Here is the timer I use at work when it's not appropriate to use an actual kitchen timer (the kitchen timer is way more fun though). There's a build for Mac and Windows, but it's open source if you want to build it for something else.


    Side note: in general I limit my work in progress (WIP limit) to one large task and one small task. If there are production issues or something, I break my WIP limit by 1 and take on a third task (it has to be an emergency, like the site is down and we are losing money), and I make sure that whatever caused the WIP limit to break gets sufficient attention so that it doesn't happen again (usually in the form of a blameless postmortem). If someone asks me to work on something that will break the WIP limit by more than one, then I lead them to negotiate with the person who asked me to break it in the first place, because there is no way one person can work on two emergencies at the same time.
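The given-when-then acceptance criteria from point 3 map almost one-to-one onto test code. A toy Python/unittest sketch, where the Cart class is invented purely for illustration:

```python
import unittest

class Cart:
    """Toy domain object standing in for whatever the ticket covers."""
    def __init__(self):
        self.items = []

    def add(self, item: str) -> None:
        self.items.append(item)

class TestCartAcceptance(unittest.TestCase):
    def test_adding_an_item(self):
        # Given an empty cart
        cart = Cart()
        # When an item is added
        cart.add("book")
        # Then the cart contains exactly that item
        self.assertEqual(cart.items, ["book"])

if __name__ == "__main__":
    unittest.main(exit=False)
```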

    Here's some books I've read that led me to work like this.

u/clappski · 2 pointsr/learnprogramming

Yeah, so do I. Beck is one of the original proponents of unit testing (and TDD), and this book is a very strong starting point.

After this, I would definitely look into reading something about integration/end2end testing and continuous deployment, like Continuous Delivery. We recently had David Farley do a talk at our work, and he has some very important things to say.

u/bobik007 · 2 pointsr/learnprogramming

Ideal testing strategy is explained here (preferably with some exploratory testing on top, but that requires time):

http://googletesting.blogspot.com/2015/04/just-say-no-to-more-end-to-end-tests.html
http://martinfowler.com/bliki/TestPyramid.html

This pyramid is easy to achieve in applications with an architecture based on microservices (Netflix, Amazon).

The problem is with huge, monolithic applications that don't have separate components communicating via REST APIs. In that case you have to rely on GUI-level testing (Selenium), which requires a lot of maintenance work.

There is a tendency in the software engineering community to deliver software to production almost immediately. This makes the change delta a lot smaller (fewer changes in a release = less risk of bugs in the release). It also places strong emphasis on testers automating everything possible and testing in production.

You can (and should really, if you consider a career in engineering) find explanations of those ideas in this book
https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

u/TalosThoren · 2 pointsr/django

A properly implemented deployment cycle should be a press-button operation. Every environment, including production, should be able to be updated (and rolled back if necessary) unattended and automatically.

I highly recommend this book to anyone presently babysitting deployments.

u/CSMastermind · 2 pointsr/AskComputerScience

Senior Level Software Engineer Reading List


Read This First


  1. Mastery: The Keys to Success and Long-Term Fulfillment

    Fundamentals


  2. Patterns of Enterprise Application Architecture
  3. Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions
  4. Enterprise Patterns and MDA: Building Better Software with Archetype Patterns and UML
  5. Systemantics: How Systems Work and Especially How They Fail
  6. Rework
  7. Writing Secure Code
  8. Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries

    Development Theory


  9. Growing Object-Oriented Software, Guided by Tests
  10. Object-Oriented Analysis and Design with Applications
  11. Introduction to Functional Programming
  12. Design Concepts in Programming Languages
  13. Code Reading: The Open Source Perspective
  14. Modern Operating Systems
  15. Extreme Programming Explained: Embrace Change
  16. The Elements of Computing Systems: Building a Modern Computer from First Principles
  17. Code: The Hidden Language of Computer Hardware and Software

    Philosophy of Programming


  18. Making Software: What Really Works, and Why We Believe It
  19. Beautiful Code: Leading Programmers Explain How They Think
  20. The Elements of Programming Style
  21. A Discipline of Programming
  22. The Practice of Programming
  23. Computer Systems: A Programmer's Perspective
  24. Object Thinking
  25. How to Solve It by Computer
  26. 97 Things Every Programmer Should Know: Collective Wisdom from the Experts

    Mentality


  27. Hackers and Painters: Big Ideas from the Computer Age
  28. The Intentional Stance
  29. Things That Make Us Smart: Defending Human Attributes In The Age Of The Machine
  30. The Back of the Napkin: Solving Problems and Selling Ideas with Pictures
  31. The Timeless Way of Building
  32. The Soul Of A New Machine
  33. WIZARDRY COMPILED
  34. YOUTH
  35. Understanding Comics: The Invisible Art

    Software Engineering Skill Sets


  36. Software Tools
  37. UML Distilled: A Brief Guide to the Standard Object Modeling Language
  38. Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development
  39. Practical Parallel Programming
  40. Past, Present, Parallel: A Survey of Available Parallel Computer Systems
  41. Mastering Regular Expressions
  42. Compilers: Principles, Techniques, and Tools
  43. Computer Graphics: Principles and Practice in C
  44. Michael Abrash's Graphics Programming Black Book
  45. The Art of Deception: Controlling the Human Element of Security
  46. SOA in Practice: The Art of Distributed System Design
  47. Data Mining: Practical Machine Learning Tools and Techniques
  48. Data Crunching: Solve Everyday Problems Using Java, Python, and more.

    Design


  49. The Psychology Of Everyday Things
  50. About Face 3: The Essentials of Interaction Design
  51. Design for Hackers: Reverse Engineering Beauty
  52. The Non-Designer's Design Book

    History


  53. Micro-ISV: From Vision to Reality
  54. Death March
  55. Showstopper! the Breakneck Race to Create Windows NT and the Next Generation at Microsoft
  56. The PayPal Wars: Battles with eBay, the Media, the Mafia, and the Rest of Planet Earth
  57. The Business of Software: What Every Manager, Programmer, and Entrepreneur Must Know to Thrive and Survive in Good Times and Bad
  58. In the Beginning...was the Command Line

    Specialist Skills


  59. The Art of UNIX Programming
  60. Advanced Programming in the UNIX Environment
  61. Programming Windows
  62. Cocoa Programming for Mac OS X
  63. Starting Forth: An Introduction to the Forth Language and Operating System for Beginners and Professionals
  64. lex & yacc
  65. The TCP/IP Guide: A Comprehensive, Illustrated Internet Protocols Reference
  66. C Programming Language
  67. No Bugs!: Delivering Error Free Code in C and C++
  68. Modern C++ Design: Generic Programming and Design Patterns Applied
  69. Agile Principles, Patterns, and Practices in C#
  70. Pragmatic Unit Testing in C# with NUnit

    DevOps Reading List


  71. Time Management for System Administrators: Stop Working Late and Start Working Smart
  72. The Practice of Cloud System Administration: DevOps and SRE Practices for Web Services
  73. The Practice of System and Network Administration: DevOps and other Best Practices for Enterprise IT
  74. Effective DevOps: Building a Culture of Collaboration, Affinity, and Tooling at Scale
  75. DevOps: A Software Architect's Perspective
  76. The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations
  77. Site Reliability Engineering: How Google Runs Production Systems
  78. Cloud Native Java: Designing Resilient Systems with Spring Boot, Spring Cloud, and Cloud Foundry
  79. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation
  80. Migrating Large-Scale Services to the Cloud
u/drawsmcgraw · 1 pointr/devops

Do you mean this book?

Searching for 'Continuous Delivery' returns a fair amount of noise...

u/planiverse · 1 pointr/sysadmin

I'm a Windows admin who wanted to learn more Linux. I asked a friend the same question as you recently. He recommended A Practical Guide to Commands, Editors and Shell Programming by Marc Sobell as well as Web Operations by John Allspaw and Continuous Delivery by Jez Humble. He recommended I start with Sobell's book before moving on to the others.

(I haven't had much time to actually read any of this, but I trust his advice.)
u/serial_crusher · 1 pointr/computerscience

This is going to sound a little sarcastic, but go on more interviews and you'll get the list pretty quickly. There's a lot of truth to it though. Interviewers ask the same stuff, so it's a smart move to do a few throwaway interviews with companies you don't like early on in the process just to warm yourself up.

Other than that, stay abreast of the buzzwords. Microservices, devops, react, redux, vue.js, es6. All of those are going to get you points in a job interview right now.

Here's a book I recommend: https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

u/gtani · 1 pointr/statistics

ELNs: http://www.slas.org/awards/2012_JALA_Readers_Choice_Award.pdf

http://www.reddit.com/r/Physics/comments/1ewhlj/i_made_a_free_and_open_source_electronic_lab/

---------

Aside from that, guidelines from mainstream software engineering practice: git or other version control (and conventions on branching/ merging, commit messages etc), comments, style guide on code, unit/integration testing, code review sessions, modular code libraries for reusability, dependency management (at least makefiles), database design or data storage conventions.

You can also look into pair programming (requires common text editor/IDE) and collaboration for remote teams (GNU screen, tmux, skype), if those are applicable as fruitful endeavors.

This is a current book that covers these issues, though a lot won't apply in a research setting:

http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/

u/guifroes · 1 pointr/softwarearchitecture

>How do you all approach designing a solution like this assuming you will not be able to acquire an existing application?

I'll ignore the whole DB selection thing, because this is the real question that you need answered (and will eliminate the need for the DB thing)


My approach would be: focus on solving the client's problem as fast and as cheaply as possible. Some principles to follow:

  1. Forget the whole scaling DB thing. Build the simplest thing that solves the problem the client has NOW.
  2. Show it to the client. Have the client use it. Get feedback. Don't skip this, it is important.
  3. Build using architecture and design techniques that will allow you to change the system as you and your client learn more about the problem at hand - you all gonna be surprised by how wrong you were at first.
  4. Repeat steps 1-3 for as long as this app lives.

    I basically just described to you - in a super simplified way - the agile/lean way of building software.

    >I am willing to put in the time and effort to learn what I need to learn here -- I just need some guidance!

    Here are some book recommendations:

    Lean Software Development: An Agile Toolkit

    Clean Architecture

    Continuous Delivery


    I hope it helps. Feel free to reach out to me if you want to discuss this further.
u/pooogles · 1 pointr/sysadmin

>How did you get started in DevOps?

I watched https://www.youtube.com/watch?v=LdOe18KhtT4. I realised this was the future and if you wanted to be in a high performing organisation you need to do what they're doing.

Unless you're in an organisation that is willing to undergo the cultural change of Operations and Development working together, you're probably not going to go far. Creating a devops organisation from scratch is HARD unless everyone is on board.

Looking into the technology is the simple part; try reading around the movement. The Phoenix Project (http://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262509) is a good start, and from there I'd look into Continuous Integration and Continuous Delivery (https://www.amazon.co.uk/Continuous-Integration-Improving-Software-Signature/dp/0321336380 & https://www.amazon.co.uk/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912).

If by this point you don't know a programming language, you're going to be in serious trouble. Learn something, be it Powershell (though honestly you'll probably want to move on to C# if you want to be amazing at what you do) or Python/Ruby.

Honestly you should be working towards what Google does with SRE if you want to be at the leading edge. https://www.amazon.co.uk/Site-Reliability-Engineering-Production-Systems/dp/149192912X.