Microsoft Azure OpenDev 10.2017

>>Hello, everyone. Today, we are at Azure OpenDev. This program is totally
not going to suck, it’s completely
live, which means, you get to see me
mess up in real time. I am starting out, with My Journey To Go, which is really interesting
because I get to drone on a lot about myself
and Open-source. So, a little bit about me. I spend most of
my life outside of the United States and I also come from a long line
of ninjas, apparently. My dad was a network
engineer for NATO, which means that computers
were always a part of my life. And I always had
the best gaming computer. No, really, I really
did, fight me. Dún was always at my house. I used to build it with my dad's spare parts, because my dad always had to have the latest and the greatest. So that was cool for me. But sadly, I was
a girl in the 90’s, which means that I had
to play with Barbies. Because girls didn’t do
computers in the 90’s, that just didn’t happen. My parents said, “Ashley,
you have an artistic brain. Computers are fun, but you
should do something else.” So, I grew up and I
became a photographer. I did that for about 10 years. And during that process, I realized a couple of things. You can’t just go out
and take pictures. You need a website. So I learned HTML and CSS and
built a website. I needed a blog, so I built a blog. I needed to rank on Google, so I learned SEO. Everything was
self-taught and it was really hard but it
was also a lot of fun. What happened though, is that I learned that photography clients were not my target market. Photographers and
other businesses were. Turns out, nobody knew
how to build a website. And just like that,
I was a consultant, because everyone
needed a website. I was building websites
for photographers, for hair salons and
then eventually, I found myself doing
a consulting job at Cisco. So I quit photography. Because what’s the difference
between art and pizza? A pizza can feed a family of four and art can’t,
just so you know. So I made a decision to code. While front-end development
is still development, I wanted to do more. I was thirty two years old
when I learned to code. That is old AF, as the kids say. During that time, at Cisco, I found a couple of
small Open-source projects, OpenStack and OpenShift,
and I was contributing documentation for OpenStack and some front-end work for OpenShift. I found that I really
liked the tech community. It was really
inclusive at the time. And so I started
contributing but I wanted to make even more
meaningful contributions. But I didn’t know
where to start. How was I going to learn? Everything in OpenStack
was in Python. How was I going to do that? So you know like when you go to a boot camp
and you super regret it? So I’m going to bag on
bootcamps a little bit. Sorry, you people
who went to boot camps. It's not your fault. In 12 weeks, boot camps say they will teach
you the following things. I’m not going to
read it all to you because it’s a very long list. But I’m just going to scroll
through these really quick. Week one, two and three. This is an incredible
list, you guys. Four, five, six, and seven. In 12 weeks, you are
learning all of these things. Not possible, it's impossible. And just like that, you're a software engineer. They give you
a card and it says, “Ashley, you are a
software engineer.” And I said, “I am not
a software engineer. I’ve been in this long enough to know what a software
engineer is.” So I made it my mission to teach new developers how to get in
to developing on their own. I created this introduction
to programming resources. It’s really great.
A lot of it’s free. Turns out these boot camps
just teach you what’s in these books for free. But I do understand that
a lot of people feel like they need
accountability of a class. So, there’s lots of
people out there who are willing to mentor
you or sponsor you. There’s a really great
blog post by Lara Hogan. Just Google it, mentorship versus sponsorship, and you should find it. It's great. So now we get to Go. I am an accidental gopher. I was writing Python
and I found myself at OSCON in 2015-ish and
I met Steve Francia. Steve Francia is well known in the Docker and Go communities. He is now on
the Go team at Google, and he said, “Ashley, I’ve known you for five minutes. Will you teach
a Go workshop with me?” And I was like, “That is the dumbest thing I’ve
ever heard, Steve. Why on earth would I
do that? I’ve not even written a ‘Hello,
world!’ in Go.” And he explained
to me that he has a hard time relating
to the new developer. He’s been doing it for
a really long time. So it’s my job to ask all of these stupid questions and I did a really great job of
asking stupid questions. So we decided to do
this workshop at OSCON. And as you could see, it
is pretty decently rated, which is strange because I
have no idea what I was doing. In fact, I didn’t know
what I was doing the second time we did it or
the third time we did it. And since then, we’ve given
the workshop five times in three different countries
and I finally understand it and I know what I’m
doing. It took a long time. Lesson here, sometimes, I want to caveat
sometimes, is just say yes. What’s the worst
that could happen? You could end up in front of 200 people talking
about something you’ve no idea what you’re
talking about. But at the end, you
might learn something. And also, nobody succeeds alone. None of you have. I am lucky enough to have a lot of people
invested in my success. This is just a short list
of those people. So thank you, all of
the people on this list, for helping me and
answering my dumb questions. So now, we’re going to
get to contributing and eliminating excuses. See? Live. I’m not
a very good programmer. These are all my excuses, but I’ve actually heard
some of these from you, too. I don’t have a lot of time. My favorite one, I don’t
know what project to work on. This one, I hear a lot. Myth: you have to be a programming wizard to contribute to Open-source. This is a really damaging thing to say, especially to newbies. There are lots of things that you can contribute to Open-source. Open-source is
a community of people just doing what
needs to get done. It’s not always code. There are lots of
things that you can do. So couple of things. One, we need people
of all skill levels. The wizards are great but
we also need the people like me asking
the silly questions because if you
don’t understand it, it’s likely somebody
else doesn’t either. Small contribution is
better than no contribution. Fix the docs, like
fix some grammar. Somebody needs to do that. The best project to start with is the one that
you’re working with right now. So, here are some other things
that I can talk about. Strengths, start with what
you’re good at right now. I don’t suck at graphic design. Go has a really cute mascot. It’s a gopher, that’s
why we’re called gophers, made by Renee French. So, I made lots and
lots of gophers, or lots of small projects
in the Go community. Eventually, it got so
overwhelming that I decided to make a website called Gopherize.me with
my friend Mat Ryer. Shout out Mat Ryer. It’s an avatar-generator
and it turns out that that silly contribution had a lot more impact
than I expected. InfluxDB uses it as
their class photo. It’s crazy, not
only that people are using the API in
new and interesting ways, and also making cakes. Somebody made a cake. That’s crazy. This repo is just full of
gopher images. It’s all it is. It’s 772 stars. That’s not insignificant. At least, I don’t think it is. I don’t have that
many followers on GitHub. So how did I know that gophers, these little gopher
images, were needed? I didn’t. I didn’t, really. I’m going to be honest. But
the best way that you can know what to do with your community is
to start by listening. Everything in Open-source
involves other people. You’re looking to join a team. So, we’re listening. Mailing lists,
this is a big one. So, for many projects, a mailing list is the main
conduit of communication. You're going to find out lots of things that the project needs. Start there. IRC and Reddit, that's
where the people who are contributing to the projects
complain about things. Figure out what they’re
complaining about and see what you can fix. Blogs, most of your heroes and core contributors
have a blog. Read it. Working with tickets, please work with tickets. Please do that. There’s
lots of bugs in Open-source. Diagnose a bug. Let’s do that. They are often poorly reported. Please diagnose a bug. Close fixed bugs. Sometimes, fixed bugs
just sit there. They just sit
there. Clean it up. It saves developers’ time.
Please do that. That would be cool.
And working with code. We all know that code is what makes Open-source
happen, right? So let’s talk about
that really quick. Beta test, please do that. Projects run on
many, many platforms. Test it out. See what happens. If it breaks, report it. I can’t emphasize this enough. Fix a bug. This is where lots of people get
started in Open-source. Fixing small things. Eventually, that adds
up. Write a test. Moar tests, please. You can’t have too many
tests, in my opinion. Some might argue. Don’t do that. Add a comment. As
I said earlier, if you are confused,
somebody else is, too. Docs. Please, please,
please help with the docs. Oftentimes, documentation is written from
the point of view of somebody who is actually
writing that project. It can seem like a manual. So somebody new coming in
has a different perspective. So help with that.
It’s important. It might seem tedious because
it is, but it’s important. Work with the community. Open-source is all
about the community. Answer a question. That’s the best way
to build people up. Answer my dumb questions, please. Write a blog post. If you find a bug
and you fix a bug, or you’re using the project in an interesting way,
blog about it. Tell people what
you’re doing with it. Improve a website. Sorry, most
programmers don’t have a lot of design talent
and that’s okay. But more than anything,
pay it forward. If you’re using a project and you’re not
contributing, shame on you. I feel like a hypocrite
with this quote, "Don't be too proud to accept help when it's offered." I often am too proud to
accept help when it’s offered. However, I never
regret it when I do. Also, it’s okay to fail. I do, literally, all the time. If anyone says that they
don’t, they’re liars. Lying lying liars. And more than anything, we are a community of coders. But if all we do is code, then we’ve lost the community. Thank you. I am Ashley McNamara. And off to my beautiful
co-host, Seth.
>>How's it going, everybody? My name is Seth Juarez, and as you know, this is all about open source. The reason why I love Ashley's talk is because it talks about contributing, and how to actually contribute. And I'm here with Ryan Parks
who actually is from the place where a lot
of contribution happens. Why don’t you introduce
yourself a little bit my friend.>>I am. So, yes, thank you Seth. I’m Ryan Parks. I’m a Solutions
Engineer at GitHub. So, I’ve been there
about three years. And my role is basically making sure that people
know how to use GitHub, helping them use GitHub, helping companies change
how they make software. So, it’s very much in line with the themes that
Ashley was talking about.>>So, in the context
of open source though, GitHub has taken off. Why do you think that is?>>I think it’s
because it offered a different and
new way of working together that excited people and captured their imagination. That’s certainly, why I
was attracted to GitHub and why I started using it
before I worked there.>>So, my horror story. Every day I’ve had
this horror story. I was using this automated tool, and it automatically wrote some stuff to a directory, and it deleted
all my source code. This was like eight years ago, and it was at that moment that I decided it’s time to
use source control. So, if you’re not using
source control, shame on you. You’re going to be like me. And that code is still
actually out there, and I can’t fix it ever because I don’t have
it unless I rewrite it. But GitHub is different
than source control. It has source control, but there's a little bit more special sauce. Why don't you tell
us about that? And then tell us a little about how one would use
these workflows, these open source workflows
in the enterprise.>>Sure. Yes. So, I’m
going to be talking about something that we
call inner-sourcing today. That’s going to be
the topic of my talk today, and I have a few slides,
it’s not too many. I promise it won’t bore anyone. I want to talk
a little bit about why I chose this topic
to talk about, and the way I like to start
off a lot of talks like this when I don’t know how
to start is with a question. Maybe you're familiar with this. And so, this is kind of a thought experiment for everybody watching with us. And the question is, what's
the most challenging part of developing software
in an organization? And I think at least part
of the answer to that is communication. Communicating what we're working on, communicating what you've just done, what you're going to do, is extremely important. And we need tools
that help us do that, because there’s more
communication going on today, and more software
developers today, using more tools today, than there ever have
been in the past. So, there is more communication, and we need tools that allow
us to do that in a sane way. And I think a kind of
humorous illustration of this need is this slide from the Cloud Native
Computing Foundation, which I find in a way it’s kind of funny
because it attempts to capture the landscape of all the different tools that we might use today as developers, and you might need
a magnifying glass to figure out what’s actually going
on in all of these. And my basic point is
just that there are so many tools that we use today. There’s a proliferation
of different tools. And they all require that we know what’s going
on at some level, like different teams are all
using these different tools, and they’re all trying
to work together. But what we need is actually not just another tool
to communicate. So, I made this slide
because I want to illustrate that it’s not just about adding another tool. Adding another tool may make this picture
more complicated, but it doesn’t necessarily
help you do anything better. I think we’ve all worked at places that had all
of the right tools, but none of the right processes. So, it’s not just about
what you’re using. And that brings me to why I chose to talk about
inner-sourcing because I think it’s a great illustration of the principle of not
just being about tools, but also being about
how you use the tools, the process by
which you use them. An InnerSource, just to quickly define it if
anybody’s not familiar with. It’s a newer concept, but it’s certainly not something that folks
wouldn’t be familiar with. So, it’s merely
the idea of taking successful and
productive practices from the open source world and applying them inside
of your own company. And that can mean that could be a thousand different
specific things, but it’s just
the idea of looking at projects that you
might use on GitHub today, that you contribute to today, that are doing things that are encouraging
people to contribute, or something that’s really cool, that you could be using
inside of your company.>>So, let’s see if
I understand this, because I looked at InnerSource and I was thinking, oh, is this another picture
we’re going to add to that whole picture
of development? It’s not. You’re saying these are
ideas that have been taken from successful open
source projects, and because there’s
a lot of them, and you’ve watched them
because you work at GitHub, you've watched them and you're saying: here are some of the principles that they use in order to make their open source projects successful on GitHub. And you're going to
explain what those are. I have an open source project, and I feel like it’s far from
successful because of some of the issues with
communication as you mentioned.>>Yes. So, I’m going to
be talking about a few of those ideas and
how you can start applying those to
your organization. And the critical thing
is that it’s all about studying what works in
open source and applying that, mapping that into
your organization. So, it’s not about taking exactly what folks do in
the open source world, it doesn’t always translate into how things are
done in a company, and that can be good
and it can be bad. So, there are trade-offs there. But it’s about applying
those specific principles. So, that’s why I’d
like to make sure that people come away with today with an idea and
some inspiration to actually go out there and look at how
their organization works, and think about some of these
ideas around communication, and how they’re using tools
and building processes, because effective
communication I’ll say that that doesn’t
happen by accident. It takes a lot of intention and planning to make sure that your in the right
environment to do that.>>All right. Well, I want to
dig in the principles here.>>Yes, yes. So, just quickly, I’ll show this slide
of Why InnerSource? I think we’ve covered
a lot of this, and I don’t need to
belabor the point here, but this is basically, just a rundown of why
you’d want to do this. More contributions, more
quickly, higher quality. So, it’s doing more with less. And just a really quick example of this that
folks can go check out, this is a graph from Illyriad Games. They worked on Age of Ascent. So, the team there radically improved the performance of .NET Core over the course of a few months. They actually got a cake from the .NET team. They did such a good job. But I just wanted to throw
this out there as an example of what is possible
when you open up what people are able to
contribute to your projects. So, I’m not
promising that all of your projects are going to
be 2,300 percent faster. No, that’s not going to happen. But I can say that
this is the potential. So, there is an Illyriad Games within all of the companies that we work at, and that can be unlocked. It's just about setting up the right environment for success.
>>Got it. Cool.
>>So I have one more slide. I think this is
the ultimate payoff, this is why folks
should care about InnerSource and DevOps. So, DevOps: I saw a really cool definition of DevOps, that it's something that accelerates your ability to deliver applications and services. And by that definition, the practices that you apply with open source
definitely fall under that. And we can see where
those overlap with this, it’s a very cool developer enjoying a nice drink
under a shady tree. So this is the idea. This is why I think
this is exciting for developers because it has the potential to make
your life better.>>Anything that
improves my practice of communicating because
literally all software is a communication problem.>>Yes.>>I’ve had many software
failures and they all come down to I
understood the things wrong. And so I’m excited for any principles you have on
how to make this better.>>Awesome. Yeah, so
that’s all my slides. So hopefully that set the stage for what I’m
going to be talking about within my demo here. So why don't we dive in.
>>Let's do it.
>>Cool. So this is my GitHub organization. This is often the first place that folks will see. And you can see I have a few repositories that
have been created in here. So, the repository that we’re
going to be looking at in a couple of minutes is
this Java calculator repository. And you can see a couple of others that I just put up for fun: Terraform, and a Java calculator that's powered by Habitat. So I guess that's a shout-out to our other presenters today. These aren't
real working examples, but this might be
in an organization. So the first thing that
I want to point out is the different visibility levels
of repositories. So there are private and public repositories, and that's the most basic level of visibility control
that you can have. So if you want everyone
to be able to see your project and contribute
to it, public is good. If you want to
control who can access it through Teams, private. And I’m using a GitHub
enterprise instance, so I can control who is able to access this URL. So, folks need to sign in before they can
look at anything. So it’s actually okay for
me to keep things public. It just means that they're public to my organization.
>>Got it. In Octodemo.
>>On GitHub.com, the situation's a little bit different. If a company wants to keep a project private on GitHub.com, it has to be private. So anything on GitHub.com that's public is open to the world.
>>So this enterprise is
just an instance of the software that you have running
as a demo purposes?>>That’s right.>>By the way, for
those that are watching, please make sure if you have any questions use #AzureOPENDEV, we are monitoring the questions and so we want to hear what
you have to say as well.
>>Absolutely. Yeah, thank you. That's right. Where was I?
>>You were in the middle of saying that on GitHub.com, the situation is a little bit different, because if you want something private, it has to be private.
>>Yes.
>>Here it's
a situation where it’s kind of private to all
of your organization, but then you can make
things private for a subset of your
organization as well.>>Exactly. Yeah. So if people who are watching
were to go to octodemo.com, they would have to log in. They wouldn’t be
able to see this. So something to note with
the visibility of repositories. And then one thing
that I’d like to point out within
an organization, beyond the repository
of visibility, is the default member or the default repository
permissions for members. So, by default, you can set certain levels of access
for organization members. In very open organizations, this might be set to write. So everybody that’s added to the organization can write
to all of the repositories. And that can be
a really powerful model for collaboration. So everybody by default can write to things, but it can also be dangerous. Read, similar situation. Everybody gets read access to every repository, whether
it’s public or private. Typically organizations
choose none. That gives you the most control over what folks are able to see. So that’s usually
what folks go with. So that’s the default
repository permissions. And I also want to show
Teams here as well. So Teams are how you organize access
to your repositories. So they're very important. And I think it's worth spending a little bit of time explaining something a little bit newer. So you can see there's
a few teams listed here. So there’s a couple
secret teams, which I’ll talk
about in a moment, and then there’s this
employee’s team with this tab here and then a few other teams. So the first thing
that I want to show is this nested team. So, nested teams
are something we introduced recently
that can help you organize or rather put folks into more logical
groupings in terms of your teams. So you can see I expanded the parent and child teams here to show what’s
actually going on. And what’s great about organizing your members
in this way, is that if I wanted
to give read access to or write access to a certain repository
to all employees, so let me just open
a new tab and do that. So if I want to change the ability of all of the people in the
employees team to write, I will just have to
change that in one place. So it’s much simpler than going to a dozen different teams
and assigning them. It allows you to cascade
all those permissions down.>>And this is an enterprise. Is this just something
you can do in the public one as
well in GitHub?>>This is in GitHub.com
and GitHub enterprise.>>I did not know that.>>Yes.>>That’s amazing. Yeah. This is a really powerful way of
just organizing people.>>Yeah, so it allows you to reflect your organization to the extent that it makes
sense inside of GitHub.>>I see.>>So rather than having
multiple organizations which can definitely make
collaboration more difficult and visibility harder. You can have a single organization
with nested teams that reflect your
engineering structure typically since that’s who is
mostly dealing with GitHub.
>>And that goes back to the notion of communication, because if you have to communicate with two people, there's just one line; if it's three, then there's three lines; if it's four, there's six. It's n choose 2 lines of communication. If you start to break things up along the lines where people naturally communicate, in the teams, that makes the communication a bit easier as well.
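(For reference, the counts quoted here follow the handshake formula: among $n$ people, every pair needs its own line, so there are $\binom{n}{2} = \frac{n(n-1)}{2}$ lines of communication; $n=2$ gives 1, $n=3$ gives 3, $n=4$ gives 6.)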
>>Yes.
>>Cool.
>>Yes. And the layout of the teams: I think it's hard to overstate how important the teams and their structure become to how people are able to access repositories that are owned by the organization. And this is what I was
referring to when I was talking about taking a lot of thought
and foresight and planning. Many organizations
grow organically without people
thinking about well, who should have access to this? So I think it’s important to plan that out so
you can actually say: well, maybe all our engineers should have access to this project, like our main codebase for example, everybody should have access to that. But I think the concern with that kind of model is what is possible, like what are the downsides that we open ourselves up to by allowing that kind of creativity. And that's why I'm going to show, inside of the repository, how you can open it up to
more collaboration without compromising the quality
or making sure that people don’t overwrite the history of
the repository, for example.
>>Yeah, there are some things I've done on GitHub, or with Git in general, where I'm just like, I think I just destroyed everything, but I don't know.
>>Don't.
>>Yeah. So it's better to be safe about
those kind of things too.>>Yes. Yeah, absolutely.>>So, really quickly, I just want to talk
about these other teams. So we looked at
this employees nested team and I just wanna show you
two other types of teams. So there is the secret team, and a secret team can only be seen or mentioned by
members of that team. So other people in the
organization that aren't members, so anybody who's not Ryan or Hailo doesn't know about 007, the team, and nobody outside
of that team can mention it. So this can be great if you have projects that
are sensitive, if you have contractors, whatever the case may be, where you want to isolate the visibility and access that other people have to that team. So those are secret teams, and I have another secret team
that’s just for bots. And the three teams at the end, I just want to mention
as this is something a colleague of mine Greg Paddick referred to as ad hoc teams. So there is the
official organization of your company and then there’s what people
are actually interested in or what they’re
experts in within that. And this is what teams, and ad hoc teams in GitHub, are great for. So I have a few here, one
for nodejs-enthusiasts. And what’s really
cool about this is that this cuts across all of the different cross-division and cross-department areas
that we have. So there are people from
different teams within or rather from different
engineering groups within technology who are all
part of this nodejs team. So it cuts across different
silos and allows you to create places where you have concentrated knowledge on Seinfeld (maybe not as useful), or sweet potatoes, or Node.js.
>>And that's
interesting because then when you have certain
things that come up, like if people are talking
about let’s say you’re working on a certain feature
and you’re just like, hey let me just tag the nodejs-enthusiasts to
come take a look at this. Can you tag them all at the same time and then have them comment on these things too?
>>Yes. So if they're in a team, then you just @-mention the team, and they automatically become part of the conversation.
>>And that's awesome
because then you can have the functional team, so then you can have the
cross-cutting teams that can be brought in as necessary because obviously
they have other work, but if they love Node.js, heck, let's just take a look at this particular issue that people are having on this other team. That's awesome.
>>Yep. So ad hoc teams, nested teams, secret teams, these are all the building
blocks that you can use to make sure that folks have access to the right things
within your organization. Cool. So with that, I am actually going to go show the app that
we're working on. So, the repository I'm going to be demoing with is the Java calculator repository. So, super complex code base, it can do all kinds of operations.
>>Sure.
>>Addition, subtraction.
>>Yeah, the main ones.
>>Yeah.
>>Yeah.
>>And it also has a REST API. So this is a Java app that gets deployed out to an Azure app, and is running out there. So you can see I just have
the ping endpoint over here. So yeah, this is my repository. So let’s imagine for
a moment that I just open this repository up to
everybody in the organization. I just added this to
the employees team and gave everybody write permission. But now that that’s happened, I’m getting kind of nervous. What if somebody deletes the last 50 commits and
then force pushes it or breaks the build
and then that goes out to production like this is just being deployed
out into production.>>So it’s CICD’d already. And so if people
write to it, they might write something crazy.>>Right. Right
now there is no CI. So it’s just
whether it’s broken, whether it’s working, it’s just going out
there right now.>>So not as scary.>>Well, it’s continuously deploying or
continuously deploying failure as the case may be. The issue is that
right now I don't have any unit tests written in this repository, so I have no idea whether anything's
been broken or not.>>Cool.>>So, I think
the first thing that I want to point out in this repository is the existence
of this README file. So this is really important. And you can see it's actually displayed down at the bottom here. If you've looked at a GitHub repository, I'm sure you've seen this before. And the README is
really important because it’s the entry point for
people into your project. So this is usually the first thing that
someone’s going to look at. So it’s important that after they look at this that they
know how to run the project, that they know where
to look if they want to contribute something if
they find something wrong. And this is a good example of a read-me because it has
that basic information there. So it has a description of what the project is at the top. And it also has a link
to this CONTRIBUTING.md file, which I'll talk a little bit more about in a moment. But those are guidelines around, if you want to open an issue or make a contribution, how do you go about that, right? And then it has
all the instructions that you need to actually start building this
and running it locally. So those are really important. Great to have. And there’s
a few different files, they’re kind of
GitHub metadata files, I guess you could call them, stored in this repository that I want to talk
about for a moment. So I already mentioned
the contributing file. And this is important because it’s a more detailed
explanation of how you want people to contribute to your project,
so it’s aspirational. This is what an ideal issue
that’s open would have. So you can see
that this document directs people
where they can ask questions on Stack Overflow or IRC, how they can file issues, as well as a few
guidelines around what they should have done before they actually
file an issue. And then at the bottom
there is more details around what to do if you
actually want to contribute. And I think at the end, the section on feedback, is also great because
it gives people an idea of what they should expect
after they submit something. So it kind of paves the way for people to contribute, and makes sure that there is no, or minimizes the amount of, surprise or confusion that people go through on common paths.
>>So I have a couple of questions
on this, because I'm looking at this and this is amazing. First of all, the dot-github folder, is that a convention we should follow, and just have those things in there?
>>So that's
the convention that you can use to store these files. So let me just go back and show. So these are all files that have to do with
GitHub specifically. So we make it easy
to store those in a hidden folder in the repository that
doesn’t clutter things, but you can keep all
of these in here. So they can live in
the root of your repository. I like to keep them in a folder to keep them a little bit
more organized.>>Got it. And then
the other question is, when we’re talking
about issue template and pull request template, those documents are just saying
what you should do during a pull request or
does it help you create them or
what are those for?>>Yes, so I can show you what those actually do
really quickly.>>So there’s already an issue open about adding unit tests
to the repository, we don’t need to look
at that right now. But here is where those
templates actually come in.>>Holy cow, this is amazing.>>Yes. So up at the top
you can see there’s a link to the
contributing guidelines that I was talking
about earlier. And the issue template file that was in that
dot GitHub folder, has actually populated this
entire issue with a template. And the markdown file is just
that it’s just marked down, but you can see I actually use HTML comments
in here as well. So Mark down supports
HD and also you can actually get pretty
sophisticated with the formatting
if you want to, bit I just use it so I can provide instructions
in the template without having that
actually show up if people submit it
without filling it out. But this is what you can do. So you can define all of the major areas that you
want people to fill out, and then actually have that
in the issue when it’s open.>>That’s pretty amazing
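(A hedged sketch of what an issue template like this might look like; the section names here are illustrative, not the exact contents of Ryan's file:)

```markdown
<!-- These comments are visible in the editor but not in the submitted issue. -->
<!-- Before filing, please search existing issues and read CONTRIBUTING.md. -->

### Expected behavior

### Actual behavior

### Steps to reproduce
<!-- Include your OS, Java version, and the exact input you used. -->
```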
>>That's pretty amazing, because I usually get a lot of "it doesn't work," and I have no idea what they're talking about. This, just in a social way, says, hey look. And the thing I like about
this is when you’re creating an issue that isn’t
a contribution. And so I love how you say,
before you open an issue, let’s talk about what
it means to contribute. Here’s something
that we would like to see. And socially, you're not, like, hitting people over the head saying, oh, you didn't tell me beforehand. The computer is gently nudging people to communicate a little bit better, which I really like.
>>Yes. Yeah. It's removing
some of the element of you needing to ask someone to do that and putting
that in front of them. So they already,
they know what to do.
>>I love it. Well, we have about six or seven minutes, and I don't want to cut you off, but this to me is probably one of the coolest things I've seen. You all probably already knew about this, but I didn't. So if you're not using this, definitely start looking into it.
>>It's new for a lot of people, so don't feel bad
if this is new.>>Cool. So, since we only have about five or six minutes, I think I’m going
to skip ahead to showing a few of
the protected branch settings. So, how you can make sure
that quality doesn’t suffer, that people don’t
accidentally remove things when you’re working
in your repository. So, there are two options that I'm primarily concerned with. The first option is to protect the branch, right? So, if you protect
your master branch, let me go back one screen. So, protected branches, you can configure for any branch
in your repository. Typically, people do that
for their master branch, since that’s what’s
going out to production, what you're cutting releases from. So, once you're protecting
your master branch, which is the baseline for
what you would like to do, that disables force pushing. So, people can’t
alter the history of the git repository and
they can’t delete commits, which is a good start, right? But there’s
two other options that are also really important. So, require pull request reviews before merging can
be great for making sure that humans are
actually looking at code before it’s being
submitted, right? So, when somebody
opens a pull request, they must have somebody
else review it and approve it before they can
actually submit it, right? So, that’s an excellent practice for encouraging code review because people have
to do it, right?>>Sure.>>And there’s a few options
around here that make this either more
flexible or more useful. I think the most
interesting option is the require review
from code owners. And that uses the CODEOWNERS file, which I didn't have time to show, but it uses the CODEOWNERS file to determine who to request for review. So, let me just show the CODEOWNERS file really quickly. So, it's just files or paths or wildcards in your repository, and then teams. And when you open a pull request, we look at what files have changed, and then we'll automatically request review from the right team, from whatever matches we get from CODEOWNERS.
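(A hedged sketch of the CODEOWNERS format he describes; the path patterns and team names here are hypothetical, not the actual contents of the demo file:)

```
# CODEOWNERS: path patterns on the left, owning teams on the right
*.java        @octodemo/quality
docs/*        @octodemo/docs
Jenkinsfile   @octodemo/release-engineering
```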
>>I feel like I've been using GitHub wrong for many years. Seriously, this is amazing.
>>This is also new. So again, don't feel bad if this is new.
>>No, I feel bad
because I’ll fix it, right? That’s the best part.>>So, code owners are great. And the second thing
that’s really important is requiring status checks
to pass before merging. So, this allows you to hook your CI in: you build your repository in your CI system, and then the result of that build is sent back into GitHub and it's displayed in the PR. So you can actually see, did the build pass or fail? And if it failed, it's actually not possible to
merge in your pull request. So, I have this second status
here marked as required. If that doesn’t return
successfully from Jenkins, I will not be able
to merge that, right? The merge button will
not be available.
>>Right.
>>So, that's how I keep contributions from breaking things. That's how you can put some guardrails around what folks are able to put into the repository.
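(These guardrails live in the repository settings UI, but they can also be driven through the GitHub REST API. A minimal sketch, assuming a hypothetical demo/java-calculator repository on a GitHub Enterprise instance; the exact payload shape varies by version, so check the API docs for yours:)

```
# Protect master: require code-owner reviews and a passing Jenkins status check
curl -X PUT \
  -H "Authorization: token $GITHUB_TOKEN" \
  https://octodemo.com/api/v3/repos/demo/java-calculator/branches/master/protection \
  -d '{
    "required_status_checks": { "strict": true, "contexts": ["continuous-integration/jenkins/branch"] },
    "required_pull_request_reviews": { "require_code_owner_reviews": true },
    "enforce_admins": true,
    "restrictions": null
  }'
```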
>>Right. So, we have about two more minutes. Can you do a whirlwind through a couple of other things that we should look at?
that we should look at?>>So, let me say this. Let’s see if I
can show you this. So, I have code opened here. I’m going to clone my repository,
going to make a branch. And then I need my terminal. If only writing tests in the real world were this easy. So, I've added my tests, right? It's done.
>>Nice. I love that hand-waviness. He's literally waving his hands right now, but we didn't put that on camera because it looked ugly, just no offense.
>>I'm waving my hands
all over the place. Thanks for telling everybody.>>No, but this is good because I think the important
part is I want to see how this actually goes into the workflow of
what you’re doing.>>Yes. So, I’m
just committing this locally and now I’m
pushing it up to my branch, and switching to
another window where I’m logged in as just a normal
developer, as Ryan. Cool, so I can actually see
that this was pushed up. And you can see the PR template actually populated all of this. So let’s create this, so I can actually
see what’s going on. So really quickly, you can
see that under the reviewers, the quality team was
requested for review. And you can see
because of the setting, my repository setting I made, code owner review is
actually required here. So, I cannot merge this until somebody from
the quality team says, “All right, this looks good.” And additionally, a required status check
has also failed. The test that I coded
intentionally had a failure.>>That’s what you get for
hand waving. I’m just saying.>>Yes. I was
waving with one hand and I typed a five
instead of a four. So, you can also see this red box: merging is blocked because of that. So, that's the two-minute
finale of all of that.>>Oh, that’s
amazing. So, here’s a couple of questions for
you from the audience. As this is Azure OpenDev, what
is an easy way to discover all the Azure-related
repositories on GitHub for searching and finding things?
How do you do that?
>>So, there are actually a lot of great Azure organizations out there. So, if you use the search qualifier for organizations and look for Azure, there are a lot of official Azure orgs out there. I would also check out topics. So similarly, if you use the search qualifier for topics, there are many repositories on GitHub that have been tagged with something called topics.
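(A quick sketch of those qualifiers in the GitHub search box:)

```
org:Azure            # repositories owned by the Azure organization
topic:azure          # repositories tagged with the "azure" topic
topic:azure jenkins  # combine a topic with free-text search
```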
So, I actually have an example of that in the repository I have here. So, topics are below the description of the repository, right? And, I can actually show you in GitHub what this
might look like.>>Oh, snap. I did, I said, oh, snap. I’m so sorry, actually. I don’t. I’m sorry.>>So, these are all the
different repositories that have been tagged with Azure, right? So, this is a cool way
of exploring the different repositories that are out there on GitHub.com. So, there are about 1,200, so whoever asked that, they have a few things
to look at now.>>Awesome. Well,
this has been amazing. Ryan Parks, we are going to be new BFFs. I'm going to be calling
you at night and be like, “Hey, help me.”>>You’re right.
You’re getting a shirt.>>I’m going to put a shirt on and he’s giving
me a shirt, okay? So it’s official.
We made it official. So coming up we have
Continuous Delivery of Infrastructure to Azure with Tyler. Take it away, Ashley. Oh, before that, we actually have an in-studio audience, but they have been so quiet. I was hoping they would laugh just a little bit, you know, at my dumb jokes. That's what we're really going for. So just take a look at them,
you can see them right there. All right Ashley,
toss it over to you, my friend.>>So, we already
knew that you were terrible at GitHub, Seth.
We knew that. I just want to point out that I stole
this hat from Tyler, I legit stole this hat. He says he has 200 more which is great because I’m
not giving it back. Also, I’m standing on a box, standing on a box
because there is so much.>>The guys from
the control room came out and.>>Taller than me.>>We should put
this box in here.
>>Yes, so.
>>Make sure you.
>>So, you are welcome for that. Yes. So, let's get into this. You don't have to squat now, we're good. Let's talk about Jenkins. Jenkins has been around for, what, like 10 years?
>>I've been in the project for a little over eight
or nine years maybe.
>>Okay.
>>And it existed a little bit before then. The creator, who is now my boss at CloudBees, Kohsuke Kawaguchi, created it at Sun Microsystems in 2006.
>>Yes.
>>And the phrase "at Sun Microsystems" is already a pretty old-timey thing to say at this point.
>>Right.
>>So, it's been around for a while.
>>So, it's old-timey. So, tell me why we should all
care about this now.>>So, one of
the cool things about working on a project
for this long, I think I've worked for four different companies now, and at each company I've brought Jenkins in.
>>Sure.
>>Into that organization, with the exception of CloudBees, which does the
enterprise Jenkins stuff. And the cool thing about
working on a project for that long is
it’s become very, very well-known,
very well adopted. It’s really actually
funny to be in a Microsoft studio talking about the Jenkins project on my Linux laptop with Microsoft logos all over
the place like the world has changed dramatically
since the project started, which is both really promising, but it’s also really good for us because we’ve
been able to help drive some of that change
in the software industry. And what is really
powerful about Jenkins is, it's an automation server, sort of, at its most basic level. It helps with automation, right? In the beginning, that really was continuous integration. So, continuous integration was sort of where Jenkins's bread and butter was. And as the industry moved on, Jez Humble and a few other people started talking a lot about continuous delivery. The folks from IMVU started talking about continuous deployment even, and Jenkins has been able to adapt and grow
with that because, A, it’s open source, and B, there’s a lot of
contributors that are bringing different plugins and integrations to Jenkins that make it sort of this infinitely
pluggable piece of automation technology that just about any organization
can take and just drop into their
existing infrastructure and make it work for them.>>Right. We live in like a continuous
everything world right now.>>Yes.>>Which is great I think.>>And Jenkins helps
with the continuous part.>>Yes.>>But it’s something
that as we’ve grown, the things that we did, I would say in 2006, aren’t necessarily the things
that we need in 2017.>>Sure.>>And so, over
the past couple of years there's been a few, sort of, major leaps forward in what we do in the Jenkins project and how people use Jenkins. And for me, part of my day job is actually telling people: no, no, no, Jenkins is different now. There's a lot of new stuff there, a lot more new stuff rather, that is really, really interesting, really, really easy to use, and really nice to look at.
>>We just bonded
because I've been doing that at Microsoft till now. Yes.
>>It was weird for me because when I first started with the project, Kohsuke would sometimes refer to me
as the cheerleader.>>Yes.>>Like my entry into
the project for a long, long time was, I would
write stuff and blogs, I would write
some documentation, I would go speak about
it at events, because Jenkins underneath all the covers is Java. And for me, as a Python and Ruby developer, I found Java...
>>Gross, what?
>>I wasn't going to say gross, but I was thinking it. It's all over. I don't particularly like Java. Java 8, however, is
actually quite nice. They’ve changed
a lot in Java land. But so, I came into this open source project
that I really enjoyed using, like what can I do?>>Yes.>>And the documentation
stuff came along, some of the outreach
type stuff came along and the infrastructure stuff
started to be an avenue that I started to contribute to
the Jenkins project. In the olden days, before GitHub was around and sort of making everything awesome for people, people had to run servers to run their open source projects.
>>I don't even remember what life was like before GitHub.
>>It was.
>>People clacking stones together, crazy.
>>That's about what it was.
>>Yes.
>>It was. You took some SourceForge and some CVS and you tried to make things work. And so, we had a lot of developers in
the Jenkins project, but no one wanted
to run the servers. Like no one wanted to
run the infrastructure to make the project go. And so, that’s actually
how I really started to get deeply embedded
in the project was, someone had to
keep the lights on. And I’m a gullible idiot
and I said I’ll do that. And so, that’s what I did
with my colleague Olivier. We work on a lot of
the infrastructure which is what brings me to
Microsoft and Azure.>>Cool. Well,
let’s get into that. Now that you’re going
to not demo things but let’s talk through this.>>Right. So, to sort of step back
a little bit in that. So, before I get to
the infrastructure stuff, let’s talk about
Jenkins Pipeline, which for me was
one of the big leaps that we’ve made over
the last couple of years. In the old stone clapping
together world of Jenkins, the way that people
would construct their automation is, they would go through
this WYSIWYG editor, and their tweeted
job configuration. And if you had something
like my normal project, I’ve got to build, it
I’ve got to test it, and I’ve got to release
it or deploy it somewhere.>>Right.>>Most projects fit
into that sort of build test deploy Pipeline. So, what people used to do, is you create a build job, you’d create a test job, you’ve created deploy a job, and you chain those together. And it was sort of
you could do it, and a lot of people did, but it wasn't very clear to people. Users coming into that Jenkins interface couldn't tell what was actually wired together, how everything was configured. And so, sometime before we released Jenkins 2 last year, a number of people
within the project started working on this thing
that we called Pipeline. And Jenkins Pipeline is, I would say, not like jobs, in that the goal is to model your entire continuous delivery process, right? So, instead of just doing the build stuff over here, and doing the test stuff over here, and then the deploy stuff, you put that all together, because that's how we think about our software development lifecycle, right? So, that was part of it. And then this sort of like X-as-code movement was
also a big influence. I started using Puppet probably five or six years ago and
about four or five years ago, I started to get sick of the infrastructure-as-code sort of rhetoric. But with Jenkins, we
had a very strong need for the Pipeline to
be defined as code. When we talk about using
GitHub to keep track of things, the one missing link
for a lot of people was, how do we keep track of what Jenkins is supposed
to do, right? And so, we created Pipeline. We created... yes, I helped create Pipeline. I had nothing to do with it. I just like to talk about it and use it. But some people
like Jesse Glick, Andrew Bayer, a lot of the, I would say, titans of the Jenkins open source project, came together and built out what is today Jenkins Pipeline, which is by far the best way to do things in Jenkins. I cannot... if you're using Jenkins, whichever camera is on, if you're using Jenkins and you're not using Jenkins Pipeline, you should be using Jenkins Pipeline, full stop. That's a flag in the ground. It's so much better
than everything that was there before. I can’t stress that enough.
But it’s also very simple. So, on the screen what I
wanted to show was like the most simple
Declarative Pipeline.>>Right. So, I wanted to
ask a question about that. So, there are two different ways
to define Pipeline’s, Scripted versus Declarative. So, why would I use
one over the other?>>So, there’s a little bit of cultural tension in
the Jenkins project. On the one side, you
have people who are like, I want super awesome
power tools where I can just do whatever the hell I want, whenever
the hell I want.
>>Yes.
>>That's what Scripted Pipeline is really for. Scripted Pipeline is a subset of Groovy, which is a Java-based, or JVM-based, scripting language, and you can do just about anything. There's all sorts of banana stuff going on in Scripted Pipeline.
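(A hedged taste of what Scripted Pipeline allows; this Jenkinsfile is illustrative, not one from the Jenkins project itself:)

```groovy
// Scripted Pipeline: plain Groovy, so conditionals, loops, anything goes
node {
    stage('Build') {
        checkout scm
        // choose behavior at runtime, the kind of freedom Declarative deliberately constrains
        if (env.BRANCH_NAME == 'master') {
            sh 'mvn -B clean deploy'
        } else {
            sh 'mvn -B clean verify'
        }
    }
}
```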
The other part of the Jenkins project, and this is where some of the tension comes in, says: I don't want to think. Don't make me think about stuff I don't need to think about. How can I do this as easily and as quickly as possible?
>>Yes, that's me.
>>That is you? So, Declarative Pipeline
is the Pipeline for you. But they’re both
actually built on top of the same sort
of underlying engine. And so, it’s just sort
of two ways to get into the same underlying core
of stuff, right? And so, what I
have on the screen, this is actually
a Declarative Pipeline.
>>Okay.
>>And so, it's very, very simple, very basic. And there's not a lot of need for you to understand how Jenkins does everything; it should be fairly straightforward to understand, fairly straightforward for everybody in the project. So, when you've got designers, you've got some coders, a product manager, and some other ops people running around, everybody should be able to understand that Jenkinsfile
running around, everybody should be able to understand that Jenkinsfile
that gets checked into your repository
because you’re all collaborating on
that software delivery process.>>Right.>>So, if we look at
this one in particular, it’s pretty, pretty basic. I mean instead of having
build test deploy, I’m really just doing a build
and all it is is saying, give me a Docker image
that runs Maven, and then run the Maven.>>So, my eye went
right to Docker. How long have you guys been
running Docker in production? So, Docker and Jenkins have
sort of an interesting past. We started using Docker for our infrastructure
probably four years ago, because Kohsuke said we
should do this thing. And sometimes, I have to make that leap of faith when
Kohsuke says something, and it turns out he was right. It was good for a number of
reasons. But as the Jenkins.>>You heard him, he
said you were right.>>Kohsuke probably hears
enough from me that his wrong. So, it’s good to have on
the record that he was right. But so, we started
using Docker a long, long time ago within
the project itself. And when we started
to develop Pipeline, it became very clear
that Docker as a sort of build and test execution
environment was an ideal sort of thing to
marry into Jenkins Pipeline.>>Right.>>Because if you think about
it, like most projects now, I would say, if you start a project now, you've probably got some front-end stuff, and you've probably got some back-end stuff in Python or Java or whatever, and Go even. I hear people do Go a lot.
>>Yes.
>>Some people out on the fringes use Go, everybody else uses Java.
>>Getting off my bum, going home to do this myself.
>>But you have these repositories
that are being developed where it’s not
just one environment anymore, you’ve got multiple languages
coming together. And that’s where Docker
and Jenkins start to really play well together
because in your pipeline, like what we’ve got here, for my build of
this application, I need Java, I need the JDK, I need to build that. For another stage
of this pipeline, I might need node.js, and instead of setting
up my Jenkins environment to have versions
of Java installed, versions of node.js and
managing all of that complexity, instead what I can say is: just use Docker, and push that to the team that runs that application, and they can choose what the right things are for their application.
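(That per-stage choice might look something like this; the images and commands are assumptions for illustration:)

```groovy
// Inside a Declarative Pipeline: each stage picks its own toolchain via a Docker image
stage('Backend') {
    agent { docker { image 'maven:3-jdk-8' } }
    steps { sh 'mvn -B clean verify' }
}
stage('Frontend') {
    agent { docker { image 'node:8' } }
    steps { sh 'npm install && npm test' }
}
```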
>>Very cool.
>>Which is one of the things that also makes Pipeline something, whichever camera's on, that you should use right now. Whether or not you deploy
Docker into production, I think that’s an
important lifestyle choice. You should consult
your operations team on whether that’s
a good fit for you. For us, it’s definitely made life easier but
it’s not for everybody. But inside of Jenkins, it makes a tremendous
amount of sense. So if I could just jump over to an even more complex Jenkins pipeline: imagine for a second you've got your simple pipeline, like my pipeline that just builds stuff.
>>Right.
>>The nice thing about this being code is you
can start to iterate on that and you check
all of your changes into code or into
the repository. So over time, you might find I’ve now got some
node.js stuff in here, I’m going to add
a stage to do node.js, and that can be
another revision. Or you might find
that you need to do a little bit more complex things (this is a demo that one of the guys who works on the Blue Ocean project, which I'll show you in a second, created), like running browser tests in parallel branches for Firefox, Safari, Chrome, etc. You can code all of that into your Jenkinsfile and do something really interesting with that.
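(A sketch of what those parallel branches might look like in a Declarative Jenkinsfile; the browsers follow the demo, the test command is assumed:)

```groovy
// Run the same browser test suite against three browsers at once
stage('Browser Tests') {
    parallel {
        stage('Firefox') { steps { sh './run-tests.sh firefox' } }
        stage('Safari')  { steps { sh './run-tests.sh safari' } }
        stage('Chrome')  { steps { sh './run-tests.sh chrome' } }
    }
}
```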
So I think what I would like to do is actually show you some of the Azure stuff that we've got going and the Blue Ocean stuff that we've got going. James Dumay, who's one of the folks that
works on Blue Ocean, he really enjoys the color blue, so we try to give him as
much grief as often as possible about
his color choices, but he hasn’t quite
latched on to Azure yet. We’ve used all the
other colors in the palette except for Azure. So because we are
an open-source project, all of our infrastructure
and our code, our Jenkins files,
our Terraform, our Puppet, the Java code, frontend code, all of that
is open-sourced on GitHub, but because we are
the Jenkins project, we also use Jenkins very heavily to run
our own infrastructure.>>As one does, yeah.>>As one does.
I mean, if you’re going to Jenkins,
Jenkins hard, right? So this is our actual
live Jenkins instance. If you go to ci.jenkins.io, you can witness all of the crazy shenanigans that the Jenkins project
has running at once. So I’ll show you some of the stuff that
we’ve got going on, like our Terraform pipeline
and this interface, this is Blue Ocean, and this is one
of those big jumps for the Jenkins project.
>>Right.
>>Where all of a sudden Jenkins doesn't look like it was developed in 2006, it looks like some
really thoughtful people came together and
built something really, really great, and that’s
what Blue Ocean is. And it's just installed
right next to the normal, the rest of Jenkins. If I go, not dot o, dot io, very important. Not there either. Autocomplete. This is the traditional
Jenkins interface that everybody is
sort of familiar with. Blue Ocean just sits right
next to that and gives you this wonderful, easy-to-use
user experience. So I’m going to go
ahead and run some of our Terraform stuff. Big fans of
Terraform. I’m really looking forward to
hearing Nick talk about some Terraform stuff that HashiCorp has been doing lately. But this is just going to start running and the
beautiful thing about pipeline is if I go to
some of the previous runs, you can start to see how the modeling of your continuous delivery
process gets very, very, I would say, beautifully visualized
and understandable for everybody in that team. So it's not just about putting the Jenkinsfile
into source code, it’s got to be understandable
what’s running when it’s actually running
inside of Jenkins.>>Right.>>And so, in this example, in our Terraform code, we have a fairly
linear pipeline. It’s fairly
straightforward but we have some skipped stages
here actually. We don't need to prepare every time. Then we plan, which is a very useful part of Terraform: in our case we do a whole bunch of stuff to sort of prepare everything, and then there's a review, and then apply.>>Right.>>And so review
is skipped here.>>Yeah, explain like I’m five.>>So for us, our pipeline needs to behave differently for a pull request
than for production. So we want to test our infrastructure as
code as often as possible, and when someone
submits a pull request, we also want to be testing that.>>Right.>>And the nice thing about Terraform and defining some of our infrastructure as code is that for a pull request, we're just kicking up to Azure; we can just say, give me new infrastructure in Azure, and try to provision it from scratch with Terraform. And so we'll create
a whole cluster in Azure, make sure that we can run
all of our configuration, and then if this goes green, if apply goes green,
then everything's good.>>Got you.>>For production, we actually want a review, and that's not something everyone agrees on; there are some, I would say, zealots about what continuous delivery or continuous deployment should be. For production infrastructure, especially when you're talking about a big project
like Jenkins, I really feel
strongly that someone should make the
final call that, hey, this isn’t a stupid
idea or you’re not about to just
delete everything that exists in Azure because that would
be most unfortunate. And so what we do is we actually just skip that in this environment. So let me jump over to our production
deployment environment, another Jenkins instance off the public Internet. It's got all the
good stuff there. And this actually just runs the deployment part
of our infrastructure. And so, where review was skipped before, in this environment we've scripted our pipeline such that we say, if we're in this special environment, if we've got these special environment variables, that means we're actually trying to do a production deployment. And what happens on that review stage is we wait for somebody to actually click a button.
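A hedged sketch of that environment-gated review stage, using the declarative when and input steps; the variable name DEPLOY_ENV and the stage names are hypothetical stand-ins, since the actual Jenkinsfile isn't shown here:

```groovy
// Pause for a human click only in the production environment.
stage('Review') {
    when { environment name: 'DEPLOY_ENV', value: 'production' }
    steps {
        input message: 'Apply this Terraform plan to production?'
    }
}
stage('Apply') {
    steps {
        sh 'terraform apply tfplan'
    }
}
```

And so I could go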
back and see that this was deployed by Olivier. And if I get this running, do a production deployment,
well, on camera.>>Please fail, please
fail. I’m just kidding.>>Please do not fail. But what’s really cool about
this is I can watch this, I can merge changes through the normal pull
request flow and get those pretty well tested for dynamically
deployed infrastructure. And then when it comes time
for us to deploy production, we can watch Blue Ocean and see when things come out of plan with Terraform, and it asks, are you sure you
want to do this? And then someone
has to say, hell yeah, I’m sure I
want to do this. Let’s do it live and then that actually goes and applies
that infrastructure.>>It’s so 2017.>>It’s very modern. I’m very pleased
with this actually. It took Olivier and me about a week to
sort of finally find the workflow that we
wanted to make work because the Jenkins project
wasn’t always on Azure. We used to live in four different data centers
with different APIs. And then we partnered
with Microsoft who gave us a whole bunch
of Azure and said it’s a cloud, go crazy.>>You’re welcome.>>Thank you. I
really appreciate it. And what that
forced us to do is, all of a sudden, all of our infrastructure
was an API away.>>Right.>>And so we sort
of stepped back from the way that we were
doing things before and said if we were
going to automate everything from the very
beginning of our infrastructure, so creating a virtual machine, creating the load balancer, all the way to provisioning
an application, how are we going to
actually model that?>>Right.>>Fortunately, it was 2016, and Jenkins pipeline existed and Blue Ocean existed and Terraform existed and Kubernetes existed; all the right tools were there at the same time. So we could just make the easy choice and bring them together, which is really cool.>>That is cool. Good choice,
good choice, sir.>>So this is not failing but we’re waiting for DockerHub to pull some stuff.>>Come on Docker,
you can do it.>>So let’s just assume that’s going to complete at
some point in the future. See where my other
stuff is going. Looks like this is also
waiting for infrastructure. One of the things that has been really interesting
about running our Jenkins infrastructure
is we started to open up the Jenkins infrastructure
to more plugin developers.>>Right.>>So there’s over a
thousand different plugins. And so we run
a Jenkins environment for over 1,000 different
repositories on GitHub.>>Oh snap.>>It's frozen. So, what makes that really challenging is we've got to dynamically and elastically
allocate resources for that environment
and developers are very impatient, myself included.>>You don’t say.>>One of the reasons I
originally harshed on Java was it would take me like 60 seconds to
run my unit tests. I can’t wait for
the JVM to spin up. I need instant
feedback right now.>>Right now. Yes.>>And so, being able to sort of provision stuff on
the fly in Azure with the Jenkins plugins that the Azure DevOps team has developed has
actually been really, really helpful to open up CI/CD for the sort of broad spectrum of Jenkins developers through our Jenkins infrastructure,
which is pretty cool. And there’s a lot of cool stuff
that our team has done. Like yesterday, I
flew up and Arun, who’s the head of
that team, was like, hey, did you see that managed Kubernetes service we launched? I was on the plane
for three hours dude. Are you serious? Like I left and these tools were not there and then
I land and Arun’s like, hey by the way, we just
solved some of your problems. You should maybe check that out.>>I’m just saying Microsoft
does cool things now. So you heard it live. We didn't edit that, it's not a voice-over, it's true.>>That is true
and it’s shocking. It’s genuinely like when I started in the open
source community, Microsoft was spelled with a dollar sign for the S, and all sorts of other juvenile shit like that. And now, all these sort of people I respect in the open source ecosystem
are really like, I work at Microsoft
now. I’m like, what?>>I know it’s
crazy, Jess is here. It’s insane. Who are we even?>>It’s a very
different Microsoft.>>It is.>>And what’s especially cool, I actually had
this tab open already, is there’s this team that’s invested in Jenkins at Microsoft, which
is pretty sweet. And so, all of these cool things that are available on Azure, this team is developing
integrations into Jenkins for.>>Man, we’re cool.>>Yes, you’re so cool.>>Yeah.>>Yeah, with
your super cool hat.>>I know, yeah.>>And so, what’s
really exciting for me, because I run Jenkins
for Jenkins people, is there’s tools that have come out which I still haven’t
been able to integrate, which are going to
make their lives and my life a lot easier. Like the managed
Kubernetes service, there’s this integration for
Azure container instances which I haven’t yet rolled out to production, which
is really exciting. And then some of the the deployment stuff
is also really interesting, because the biggest challenge
for a lot of people using Jenkins is that last mile, sort of, to production. Like we've tested it,
we’ve built it, we’ve stood it up in a QA environment. Now, getting that
into production safely is something that
from the Jenkins perspective, we’d like to be modeled
in the pipeline. But a lot of people
don’t quite have the integrations there to successfully get things
out to production.>>Right.>>And so the Azure integration
with Jenkins is actually pretty promising for
that too, just exciting.>>We have about three minutes. So I want to make
sure that we cover all of the things that
you want to cover.>>There are so many things.>>I know.>>So, of all the things
that we could cover. One of the reasons I
wanted to come talk about some of the infrastructure
that we have for Azure, and not just the plugins
which are all open source, not just Jenkins itself
which is all open source, but our infrastructure
is open source. And so the way in
which we use Jenkins, is open source and can
be copied and emulated. It's all MIT licensed and can be reused at will.>>Very cool.>>We have in this repository, all of our stuff is
under jenkins-infra. We have our Terraform plans. And so, the way that we provision infrastructure on Azure with Terraform is open for the viewing, which is simultaneously really
cool but also terrifying.>>Yeah. Go break
some shit, you all. It’s going to be cool.>>Do not go break some shit.>>The nice thing about working on
an open source project is you have to think about that.>>You’re right.>>That’s why our pipeline
doesn’t do some things. And when we were working
on pipeline very, very early within the project, there’s some access controls and other useful things
that will prevent, you know, Joe Rondeaux coming in on GitHub and submitting a pull request.>>Or Ashley McNamara. Who could say?>>Submitting a pull request that just deletes everything. Like the nice thing
about some of the underlying features
and pipeline is, unless you have write access to that repository, you literally cannot. But that's not enough; defense in depth
is very important. So we actually run multiple
Jenkins environments and Joe Rondeaux has no access whatsoever to what actually
touches production.>>Perfect. Classic
Joe, you know.>>Joe Rondeaux is
always causing trouble. So this is still running. We’re applying
some stuff, that's good. Our deploy is still going. We're waiting. DockerHub, you are killing me.>>Oh man, oh man.>>Killing me.>>But one of the other things that's really exciting
about this for me, is there’s open source
communities that are now sort of springing up
for infrastructure stuff.>>Yes.>>So it’s not just the people
who work on Terraform, it's the people at, like, the Wikimedia Foundation or MediaWiki, I don't remember which it is. They have open sourced a lot
of their infrastructure. And there’s a lot of other organizations
that have open sourced their infrastructure which
is really, really exciting. So I just wanted to mention this really briefly, on
opensourceinfra.org, there’s a fledgling group of people that are
collecting links to all of the different open
source infrastructure projects. And so if you don't like what the Jenkins project is doing on Azure, or you're curious about how another project deploys on AWS or Google Cloud, all of that stuff is becoming increasingly open sourced and sort of cross-referenced, and we're sort of stealing ideas from each other on how to support open source projects in a better way using infrastructure as code, pipeline as code, the various X-as-codes that we have available to us now. So all of this stuff
is now another way that Joe Rondeaux and
Ashley McNamara can come contribute to
the Jenkins project for example.>>Very cool. And so
we are out of time. If people want to get in touch with you and learn
more about this, how would they do that?>>So the best place
to really go is the jenkins.io website, at jenkins.io/participate. We have a full listing of the relevant mailing lists,
the IRC channels. If you wish to contact
me directly and tell me all of the terrible things about my infrastructure,
[email protected]>>Perfect. And then
somebody asked how they could get this sweet hat. I don't know, man. I
think it’s exclusive.>>From your cold
dead fingers is what I heard before, right?>>I will fight you, Seth.>>So every year at Jenkins World, which we hosted at the end of August in San Francisco, and I think it's September 2018 in San Francisco again next year, we have a contributor summit.>>I love those.>>And the contributor
summit is basically where we convince CloudBees and the other conference sponsors to set up a big room with all the AV, food, and everything that we need, and have about 50 or 60 people talk about the future of Jenkins. What we can do better, where we're having trouble, and things like that. And this year, that hat was given to Jenkins contributors. So contribute to Jenkins and get that hat.>>Contribute, or steal it.>>Or steal it from me when I come to.>>Yeah.>>That's one option.>>So thank you, Tyler. It's been very cool. We're going to go over to Seth now, who still
doesn’t know things.>>That’s right. Let us
give them a hand first, if we want the studio
audience to feel like they are doing something. Thank you for that.>>I am actually here with Matt Rock. And
you work for Chef?>>Yes I do.>>I went to a ChefConf. Probably not this one, that just happened, but the one before, in Austin. And I remember that there was a new thing called Habitat, and I'll tell you just a short story, because I know your demo is good. There was a hat that they gave out, and mine was too small. And in the spirit of open source, I went into a cab with this dude and he said, well Seth, my head is too large. And so we swapped, in the spirit of cooperation. But during that whole time,
I feel like I didn't really understand what Habitat was, even though I had the sweet hat, which I still wear when I do gardening work. So why don't you help me out, my friend.>>What did you swap that
with? So you had a hat?>>I had a smaller hat and
he had a larger Habitat hat. So we swapped. Right. So it was a very special moment, and after that we hugged for a while,
it was also good for him.>>That’s nice. Yes.>>So tell us about
Habitat, because I felt it was a container, but it wasn't. But it was more like an environment. But not that. It's like, things that I don't quite understand yet, I feel like they're super powerful if I just understand a little bit more. So Matt Rock is going to
by showing us how to modernize our Java
development workflow with Habitat. Take
it away my friend.>>Yeah. So hopefully I can fill in some of
those gaps for you. So yeah what we’re going to do today is I’m
going to show you, I’m going to basically
give you a tour, kind of go through the Habitat ecosystem, and show you how we're going to modernize your everyday development workflow and deployment workflow using Habitat. So real quick, before
I kind of dive in. So my intent today is to give you that
overview to give you that guided tour and show you
what that workflow’s like. But I’m not going to
necessarily dive into all the nooks and crannies of Habitat, of which there are many. And so if you're watching this, or if anybody
out there is watching this, I hope people out there
are watching this.>>My mum is on for sure.>>So you know, if you have questions, we have Gitter and Twitter and all that stuff. But if light bulbs pop up in your head and you want to dive more into the details, go to www.habitat.sh. We've got all sorts of documentation; we have tutorials that you can go through. We also have a Slack channel, so that's a great place to go if you want real-time answers
to your questions. And that is heavily watched by actual Habitat core maintainers, so there's a good chance you'll get a timely response there. Everything I'm going to show here today is on GitHub. We've heard about that. Most of us know about that. It's in my org, mrock; the name of the repo is national_parks. So, Habitat, as you mentioned, is brought to you by Chef. And we're not going to talk a whole lot about the actual Chef client today, but I do want to mention that we now have a presence on the Azure marketplace. So if you want to check out Chef, if you want to set up automated deployment of your infrastructure, of your nodes, on Azure, totally check out https://tinyurl.com/AutomateAzure and you'll get all sorts of information about that. So, that's all I'm going to say about Chef. A lot of
people know about Chef; they know that Chef is a great tool for automating your infrastructure. The thing that really differentiates Habitat from Chef is that with Chef, the center of automation is, you know, the servers, the nodes; we were known as the people who kind of take your server from nothing to what you want it to be. Habitat focuses on
the application. So, what Habitat is about is providing an environment, a habitat if you will, for building your application, packaging your application, deploying that package to a centralized repository, and then having what we call Habitat supervisors. These are your application servers that actually consume packages from that depot and run your application. So, we go all the way from
building your application to monitoring your application. So, it's a habitat for that whole application lifecycle. And so I'm sure there's
a lot of questions there and what we’re going
to do is we’re just going to do it. Yeah.>>So if you saw me
awkwardly pointing at everybody that’s because
that’s your camera over there. I get that one because
people are like, you’re not looking at me.>>Okay I’m sorry.>>So everyone that
saw me awkwardly point, welcome to my life.
Go ahead buddy.>>All right man. So first thing we’re
going to do so we got to. So, we’re going to
build a job application. It’s a fairly straightforward
application it’s a web app. This is going to be
running on Tomcat. So a lot of Java developers
should be fairly familiar with this kind
of an application. And so we’re going to have
habitat build this for us. And the thing that we
need to do to make that happen is create a plan. So, a plan is going to tell Habitat about our application, and it's going to tell Habitat how to build our application. So here's our plan; as you can see here in my editor, it's called plan.sh. It's going to be plan.sh, as it is here, if we're targeting Linux; it would be plan.ps1 if we were on Windows, and it would be PowerShell. But the fact of the matter is, a plan is shell script, so you don't have to learn another language. Chances are, if you're a developer, you know shell script. So, you should be comfortable
in this environment. And the plan is usually comprised of two major sections. One is metadata, the section at the top that basically describes your application; and then there's the part at the bottom, which is essentially callbacks that you implement to tell Habitat how to build your application. So, let's look at some of
these things at the top. We've got some basic stuff, like name: our application is called national-parks; we're going to display all the national parks in the United States. Awesome. We have an origin, so you can think of this as a root of ownership; it's kind of like a GitHub organization. This is my personal origin, mrock, but this may very well be your company name. We've got version and dependencies. A big part of Habitat is managing your dependencies, and what that allows us to do is split up our runtime dependencies from our build-time dependencies. Because, like a lot of us
know as developers, there are tools that we need to build our app, but we'll never need those tools again at runtime. You know, we're going to use Maven here to build our war file, but there's no need for our application servers to pull down Maven. They will, however, absolutely need Tomcat. So that's one of our runtime dependencies. And then we have bindings. So bindings are an important
concept in Habitat. I'm not going to go into all the nuts and bolts of the syntax around bindings, but suffice it to say, bindings are how services talk and connect to one another. This is a powerful
concept in Habitat. So, as I mentioned, in Habitat, we have supervisors that run
your application. So a supervisor is
basically a process that starts up your application
and runs on a server. It can run on a container. It can run on a VM. It can run on bare metal.
It doesn’t matter. But the fact of
the matter is, today, in our distributed
application land where we all live, I live
there, I think, you live there?>>I write demoware and I’ve been writing demoware
for the last seven years. It’s really sad. So
I'm really excited. The thing I'm thinking about, as you're saying this, is that this feels like a lot more than containers; a container is an output of a process. This is more a description of both a pre- and a post-process.>>Absolutely. And as we're going to see later in the demo, this is going to set us up
really great for containers. So what I’m getting at with the binding is that in
a distributed application, the application is not
the application itself. If we were to just
build this application, we would be very disappointed. I couldn’t call it national
parks because there would be no national parks because
we need a database. So we’re going to need to
bind to database service. Furthermore, this
application is going to take off like wildfire,
well, national parks. So I’m expecting a lot of traffic and I can’t
just run this on one node. So I’m going to
have a few nodes in Azure running this application. So I need a little balancer,
another service. So we’re going to
be using HAProxy and HAProxy is going to
need to be able to bind to us to be able to load us into its configuration so it can be my front end point
to hit and then it will fan out to
my application servers. So all of that is
going to work together in Habitat and we’ll
see how that works. And that’s what
bindings provide. Bindings provide
a way for supervisors to expose configuration
data about the services that
they’re running so that those services
can discover one another and connect and
interact with one another.>>Got it.>>So now, we’ll skip
down to the callbacks. There’s a variety of callbacks
that you can implement. This is a fairly
straightforward one. We’re only implementing a few. But suffice it to say that this is going to tell Habitat how to
build your application. Habitat is basically
technology agnostic. Habitat was written in Rust but it can build
any kind of application. But it doesn’t know
anything about Java. It doesn’t know
anything about Ruby, about Node, about C#. But we know about that. The developers of this application know, you certainly know, how to build your own application. So this is your opportunity, an opportunity which I
suggest you take because otherwise not a whole lot
is going to happen, to tell Habitat how to build it. For example, we can see here that in the do_build callback, we're going to call mvn package. And because I've declared Maven as a build-time dependency, I know that Maven is going to be on my path, and I can use that tool to package up my war file.
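To give the plan a concrete shape, here is a minimal sketch; the version, dependency names, and install step are illustrative guesses rather than the actual national_parks plan:

```bash
# plan.sh -- a minimal Habitat plan sketch
pkg_name=national-parks
pkg_origin=mrock
pkg_version="6.2.0"
# Build-time-only dependency: Maven.
pkg_build_deps=(core/maven)
# Runtime dependency: Tomcat serves the war file.
pkg_deps=(core/tomcat8)
# Declare a "database" bind that expects a port to be exposed.
pkg_binds=([database]="port")

do_build() {
  mvn package
}

do_install() {
  # Copy the built war into the package's install directory.
  cp target/*.war "${pkg_prefix}/"
}
```

So when Habitat is all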
done with this plan file, what it will do is it will
drop what we call a hart file, an immutable artifact, a package that contains all the artifacts, in our case a war file, that your application needs to run. So for us this is a war file; if this were a C# app, it might be your DLLs, your exes; if it was Ruby, .rb files or gems. So all those raw artifacts, plus the descriptive information. For example, it's going to have your dependencies and
their dependencies, the exact versions that
you built against, which is important
because what that means is that when I
build this application, all my dependencies are Habitat packages as well, and Habitat will allow us
to basically dynamically link against all of those libraries inside of
the Habitat environment. And so what that means is
what we build with is what we run with and that’s
super important.>>Well, it’s also frustrating
because how am I going to say it worked on my machine?>>Exactly. The quintessential scenario
right there that we try to-.>>Yeah. I’m not going
to be able to get out of it not working. Is that
what you're saying, Matt? I hear you now, buddy.>>So let's actually do it. Let's build this application. There are a couple of ways that we could build the application. The way we're going to do it today is build it locally. A couple of weeks ago we launched a new service called Builder. It's basically a hosted build service where we can actually watch your GitHub repo. We can watch pushes coming into your GitHub repo. So as your application changes, we can actually build
your application for you. We can take your
application, package it up, we’ll upload those packages
to a depot where your supervisors can automatically consume
those packages. So it fits great into your CI/CD solution. But what we're going to do is we're going to
just build it right here because what I’m trying
to lead you through here is the developer workflow. And so we’re developing
this application version 6.2. So let’s build this,
and real quick, a little word about this environment that
I’m in over here. So this over here, this is what we call
the Habitat studio. And this is
a special environment. So depending on
what platform you’re on, I’m on Windows but it would be the same if you’re on a Mac, when you enter a studio, we basically spin up
a Docker container that includes in it a very small Linux distro, very minimal, that has a running Habitat supervisor and the Habitat build system. So this is where I
can test building my application and then I’ll be able to actually test
running the application. And what’s key is you can
think of this as a clean room. So as I mentioned,
the Linux distro is super small, so there's a very small path, very little in your /usr/bin. So the idea is that any dependencies and
any tools that I need access to are going
to be exposed to me as actual Habitat packages. And again, that enforces
the guarantee that what I build with is going
to be what I run with. I don’t have to risk
building this and having it linked to some
other glibc off in the distance and then I port this to my live environment and I get the spontaneous
combustion effect.>>And we can spare ourselves the embarrassment of, like, crashing a build or something, because you can run it in a clean-room environment on your local machine. And everybody knows, after six months, if you've installed too many things on your machine, things get weird. And so having this really cool local clean environment is pretty powerful.>>Oh, totally. Yeah.
Yeah. So let's do it. So I'm going to type build habitat. And what this is going to do is look for a file called plan.sh inside the habitat folder. And so when I entered
into this studio, it mounted the directory
that I entered the studio from into the source directory
in the container. So what that means is
this directory over here on my editor is mounted
over in the container. And so what's cool about that is, if I made a mistake here
or if things change, I want to change stuff, I can change things
over here, and they'll be automatically manifested over here.
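The local loop being demonstrated boils down to a few commands; this is a hedged sketch of the studio workflow of that era, with the folder name taken from the demo:

```bash
# Enter the clean-room studio (spins up the minimal container).
hab studio enter

# Inside the studio: build the plan found in the habitat/ folder.
build habitat

# The resulting .hart package lands in the results/ directory.
ls results/
```

>>So these builds are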
happening like when it says installing JDK or whatever, it’s installing it
on that container.>>Exactly. That’s correct. Right. So we saw
the Maven output. Okay. So it's finished. So now we have a hart file, our artifact, dropped in the results folder. So let's take a look at
that, lots of builds. I’ve done this
quite a few times. This is the one
we’re interested in, 6.2, and I just copied that.>>That’s the one I was
going to say looked good.>>It’s going to be so
much better than 6.1.>>It’s a good one.>>So now, what we’re going to do is we’re going to
run this locally. Let’s see how this
all worked out. So we do have
a running supervisor here. Let’s see what
that supervisor is doing. Let’s type hab sup status. What are you
up to, supervisor?>>I thought it was
like, sup, what's up? I feel like that's what you should have done there.>>Yeah. Yeah. That's a strategic thing. So I do have a service running. I have my MongoDB service currently running
in the supervisor. That alone isn’t going
to do a whole lot for me. So let me go ahead and start the application I just built. So not only does it
drop this hart file, but it also installs
the hart file, installs the package into
the local Habitat environment. What that means is I can do
this so I can say hab start. Hey, Habitat, start my
application mrock/national-parks, and I pass
this bind argument. So this bind speaks again to the binding we talked about earlier. We’re saying, “Hey, that database binding
we talked about in our plan that is expecting to have a port property
exposed to it. ” This right here
is saying, “Well, you can find something
that fulfills that contract in this service, this np-mongodb service, which is the service that's running.”
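As a rough sketch, the start command with its bind looks something like the following; the service-group name np-mongodb.default is an assumption inferred from the conversation:

```bash
# Start the app, wiring its "database" bind to the running
# np-mongodb service group.
hab start mrock/national-parks --bind database:np-mongodb.default
```

So I'm going to hit enter,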
a whole lot bag that basically says
thank you very much. Supervisor started
your application. If I wanted to see more, I can type sup-log which is essentially going to follow the tail of the
supervisor log, sup-log.>>Sup-log?>>So I see some Java-y, Tomcat-y stuff, and I know
things are happening. But let’s absolutely make
sure things are happening. So this is the glory of that, the National Parks application. This is actually running on Azure; this is my HAProxy endpoint. So let's change this to localhost and see if 6.2 comes out.>>That's to keep us honest.>>That's right. And there we go. So 6.2, I have National Parks
which is great. And so the system works. So that’s the
developer workflow. So now let’s say, “Hey, my development workflow
looks great on my machine. Let's push this off to Azure.” So here is my Azure portal with the VMs that I have in Azure. I've got three
application servers so these three servers HAB one, two, and three are
running Tomcat.>>Some good names.>>I thought so, yeah. Thanks. HAB Mongo that is
running a supervisor that’s hosting my database and
HAB HA is another supervisor, another VM with
a supervisor running HAProxy that's load balancing HAB one, two, and three. So port 8080, which is what Tomcat runs on, is not exposed. I can't hit that on one, two, and three, but I can hit it here on my HAProxy, which is load
refresh just to make sure it really is running 6.1. So this is old, old 6.1.>>Not the good one.>>No, no. So what we want to do is
we want to do is... I have HAB one, two, and three specially configured with what we call an update strategy, and that update strategy is called rolling. What that means is these three nodes, HAB one, two, and three, have elected amongst themselves what we call an update leader. So that update leader
is right now watching the Habitat depot and looking for
updated packages.>>Which is those hart files.>>Exactly.>>Got it.>>All right. So if that update leader sees
an updated package, what that leader is going to do, is it’s going to have each
of its followers one by one update themselves and
then it will update itself. And so the power
of that is that we can update our application in real time, node by node, and not all at the same time, because if they all update themselves at the same time, they all go down at the same time.
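A hedged sketch of how a service might be started with that update behavior; the flag spellings follow the Habitat CLI of the time, so treat them as assumptions:

```bash
# Run the service with a rolling update strategy, watching the
# depot's unstable channel for newer packages.
hab start mrock/national-parks \
  --strategy rolling \
  --channel unstable \
  --bind database:np-mongodb.default
```

>>Chaos. And cats and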
dogs living together.>>This is going to be
a no tears deployment. And so, let’s do this. The first thing we need
to do is actually upload our hart file and because
I put it in my clipboard, I can just paste it in. So I'm going to delete that.>>Yeah, we want a good one.>>Totally. Okay. So it's uploaded to the depot.
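The upload itself is one command; a hedged sketch, with the file name pattern as an assumption:

```bash
# Push the freshly built package up to the depot.
hab pkg upload results/mrock-national-parks-6.2.0-*.hart
```

And now, let's actually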
look over here. So here I have, so
these three guys bam, bam, bam are HAB
one, two, and three. And so, they’re actually watching the depot
once a minute. So really it’s
going to depend on-.>>So just like awkwardly stare at each
other for a minute.>>That’s right. Yeah,
yeah, yeah. Look into our.>>Yeah, looking in my eyes.>>So there we go. Okay. So updates are happening. So we’re moving. And once all these guys
are done moving, I’m going to expect to see 6.2. So let’s queue that up. We are here, so I’m
going to hit the refresh.>>And again this is
hitting the load balancer. Look at that.>>That’s right. Yeah, so yeah, I’m hitting the load balancer. So the load balancer
is balancing those three nodes and 6.2. I mean, yeah it is so
much better, right?>>Yeah, I feel like
they are better, yeah.>>Yeah. So awesome. So now in the real world, typically, you’re not
going to be deploying from your laptop to production.>>I mean we shouldn’t.>>Yeah, we shouldn’t.>>But some people
live on the edge.>>That’s true, yeah. I mean,
I’ve been guilty of that. But this is a perfect world that we’re living in right here. And so what we what we can do, we can pretend that these were
actually my staging nodes. And in Habitat, we have this concept of channels. And so we have an unstable channel, and we
can have a stable channel. So right now, I actually have these nodes specially
configured to watch the unstable channel, which is where
things go when you first push them to the depot. But if I felt good
about this, I could run a single command that basically just promotes the package to the stable channel.
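That promotion command plausibly looks like this; the release timestamp is a placeholder, since promotion targets a fully qualified package:

```bash
# Promote the specific build from unstable to the stable channel.
hab pkg promote mrock/national-parks/6.2.0/<release> stable
```

And then my production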
servers would be watching that channel which is what they watch by default. So that’s great. I see my stuff in Azure. So now let’s talk
containerization. And we were probably
running low on time.>>We got five minutes.>>Okay. Awesome.
Well, the beauty is that there’s actually
not a whole lot to say because Habitat has already set us up great for
containerization because-.>>We’ll have time to
stare at each other.>>Yeah, I’ve made time.>>Yeah, we’ve
made time for that.>>Yeah, that’s
been a goal today.>>Cool.>>And so in Habitat, we have this great kind
of isolated environment, where the portability makes it easy for us to convert our application
over to a container. And so, Habitat makes
that super easy. There's really just a single command, called export. I can export any package to a variety of different formats, one of which is a Docker image.
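A hedged sketch of that export step, using the subcommand shape the Habitat CLI had at the time:

```bash
# Turn the package into a runnable Docker image, supervisor included.
hab pkg export docker mrock/national-parks
```

And so what happens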
when we do that is, it starts up a bottom layer, again that small Linux distribution that we saw in the studio, super small, and then it adds to that
the Habitat supervisor. So I’ve got all the
Habitat stuff that I need to run Habitat and
anything in Habitat. And then it takes my package along with all of its dependencies and all
of their dependencies. So the entire dependency
tree takes all of those packages and
adds that to the image. So in the end, I have
this image that can run, it has everything
it needs to run. And so if we go back to Azure, this guy down here, this bottom right-hand one, is a Docker host that I have running in Azure. Let's do a docker ps to see what's going on. Probably the same thing that happened the last time I hit docker ps. So it's running a container, a single container, my MongoDB. So I packaged, I exported
my MongoDB package. But again, that container is not
going to do me a whole lot. So let’s start up our National Parks container
which I’ve previously, it’s like a cooking show.>>Yeah, of course.>>I previously created
an image out of that. And let me also just
to back up real quick, what’s cool is that builder
service I mentioned, not only will it build
your stuff for you, not only will it upload the
hart files to the depot, but you can also configure
it to then export your package to
a docker image and upload that to
your docker hub registry.>>We literally don’t have to
work anymore it feels like.>>Exactly yeah,
the robots have won. So anyway, so here, if you’re familiar with docker, you’re probably familiar
with docker run. No magic here. I’m
forwarding port 8080, which is what Tomcat
is running on. My image name is
basically the same as my application name
National Parks. And then all the other arguments
are just forwarded onto the supervisor that’s
going to be running inside of that container. We’ve already seen
the bind arguments so I won’t really cover that. But the peers, so this is how supervisors talk to
other supervisors. So we saw locally inside
that studio environment, we saw two services
but they were running inside
the same supervisor. So we didn’t need
this peer parameter. But you could conceivably have what we call a
ring of supervisors, so when you have
supervisors that are connected to one another
that’s called the ring. And there could be hundreds
thousands of supervisors.>>We call them overhead here. That’s a joke. Sorry, sorry boss. Go ahead. I’m sorry. We
had an extra 20 seconds.>>Yeah, we got that. So it’s good stuff. So you could have all these supervisors
running in this ring. So right now I am nothing, but I want to start
up a supervisor. I want to join that ring. All I need to do is give it the IP address or
host name of one member, one peer of that ring and
then it will start gossiping among that ring
and join that ring. Now in this case, I only have one peer, so it's going to be pretty easy to choose which IP address. It's 172.17.0.2, that is; I just happen to know. I know these things.>>Sure.>>That's what the MongoDB container's running at.
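Putting the pieces together, the run command being described looks roughly like this; the image name, bind, and peer address come from the conversation, but the exact invocation is an assumption:

```bash
# Forward Tomcat's port; everything after the image name is passed
# to the supervisor running inside the container.
docker run -p 8080:8080 mrock/national-parks \
  --peer 172.17.0.2 \
  --bind database:np-mongodb.default
```

So I'm going to run this, and now this supervisor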
is starting up and we see some Tomcat action going
on with our war file. And so now, let’s go
over to this other tab. So it was broken before. So now I'm hitting my Docker host. Now, this is 5.6.>>This is like the ugly one.>>Yeah. But like I said, our Builder service can
automatically upload. So if I were to
push this to GitHub, we would see the builder service actually start to kick
in and build this. Then I could just do a docker pull and I could get 6.2.>>So I'm starting to
understand now because Habitat not only describes your application in
terms of what it needs. It also describes
your application in terms of how it runs
and how it builds. Aside from that,
the output of that is this hart file
which I’m assuming is a Habitat runtime file
that tells you all about not only what you’re running but how
you're running it. Because it's so self-contained, you can take that and put it into a Docker container, or put it into some other whiz-bang thing which is going to come out sometime.>>Exactly. Yeah, you can
put it in a container, a VM. I mean, you could have
supervisors running Windows, supervisors running Linux
talking to one another, and pretty soon
you’ve got cats and dogs living together
and it just gets crazy.>>You know what, I mean, it’s amazing. Matt,
this is really cool. I feel like I’m starting
to understand Habitat a bit better, because it feels like containerization, while it is something that can be part of Habitat, is definitely not the entire story.>>Yeah.>>Where could
people go to find out more about this
before we close up?>>Yeah. So as I mentioned earlier, and
here's the slide again. So habitat.sh, that's going to have all of our documentation. It's going to have tutorials. It's going to have a link to that Builder service that I mentioned before
which is totally free. I want to make sure
I get that out there. Then Slack is very active. So if you have questions you want to ask in real time, if you're hitting a wall, if something doesn't make sense, or something just seems broken... The chance, well, not a good chance, but there's a chance that it is broken. And there are people standing by who can fix it. It wouldn't be the first time that somebody reported a bug there, but we can get stuff fixed very quickly. But more importantly,
we can communicate concepts and get you on the right track. And
that’s the place to go.>>Well thanks so much Matt. This has been very instructive. We’ve been learning
about Habitat, how to modernize your workflow. We've had a Java app, but as you can see, this is agnostic to the types of applications, even dependencies, everything. Thanks so much for
of a hand and we are going to toss it over to Ashley.>>Thanks so much.>>Hi. I’m here
with Nick Jackson, and we’re going to talk about
Terraform and Azure today. And I’m also again standing on a box while
Seth gets a chair. So all I’m saying.>>It’s unfair.>>It’s unfair. It’s you know,
you heard it here. Nick is going to show us
some really cool things. And he says his demo
cannot fail.>>I said it might work.>>Okay. That's very different. Also, he can't code and smile at the same time, I've been told.>>Yes, if I look angry, it's not because I'm angry, it's because I'm not very
good at multitasking.>>He’s literally
never been angry.>>Ever.>>Ever. Ever. All right, let’s do this man.>>Awesome.>>Oh, one more thing.
I want to point out for Tyler who’s sitting in the other room that Nick has
some Go stickers on here. There’s another Go developer
in the room. Tyler.>>Yeah. Go is awesome, everybody should learn Go.>>Truth.>>So what we can
do is have a look at a little demo of
Terraform and how we can use Terraform to create a Kubernetes Cluster and deploy something
to Kubernetes.>>Oh, wait. What is Terraform?>>Terraform is like
an amazing thing, and it's coming to your eyes now, because I've got some slides about that.>>Do you?>>I do.>>So are they corporate
slides? I can’t wait.>>They’re the thing
that pays my wages. So I think it's interesting, because HashiCorp is pretty well known for its products. So, we've got Vagrant, which is one of the things that most people started using.>>Correct.>>And then we kind of built that out with Packer, and Consul, and Nomad, and Vault, and Terraform. But the company was
actually founded back in 2012 by
Mitchell and Armon. And the kind of
the concept was that they were looking
at the market and thinking there’s got
to be a better way to be dealing with the problems
that we're now facing, which is kind of more distributed systems, moving to cloud-based, but even potentially looking at cloud migration and all sorts of things like that. So, a lot of people kind of know the names of the products but don't actually realize that they're all managed by HashiCorp. So there are kind of two problems that we see, and, like,
these two problems are not mutually exclusive.>>Right.>>So you’ve got organizations
who want to be able to manage and migrate some of their applications
to the cloud. But then there’s also
the other problem which is application developers are getting a lot of
pressure to deliver faster and deliver
with higher quality. So the kind of the tooling tries to
cover both of these areas. So we have things like Terraform which
allow you to do that, that lift and shift, where you can move
your infrastructure over to cloud where you can define it in a very
sort of declarative way. So it's reproducible. You can put it into source control and versioning, which is all awesome, because it's a method developers have been used to for a long, long time with source code, right?>>Right.>>And it's also
about adopting DevOps. So it's about getting
operations and developers to work together to solve the problems
that we all have. I mean like, infrastructure is not just an operations problem. It’s something
that we all share.>>That sounds like magic, like what you just said.>>It’s magic.>>It’s okay.>>No. It’s declarative magic. So it’s reproducible.>>Perfect.>>It’s awesome. So into that, that like kind of like
cloud adoption thing. You’ve got your
traditional data center, and then that’s kind of made up with your application servers, maybe your databases,
your networking, and stuff like that. And what people kind of want to start doing is taking that core infrastructure, the application platform, and the security. And security, in a private data center, more often than not just means thinking about things just
like a firewall.>>Right.>>But you then sort of start to try and migrate that
over into the cloud, and the first place that most people look at
is infrastructure. So that’s kind of
like the first layer to kind of tackle the problem. And then once you get
your infrastructure in place, you start asking well, how do I start running
these applications. And these can be a combination
of modern applications. So, it can be things like dockerized or containerized
microservices. But more often than not as well, you're dealing with kind of legacy applications, some of the things that we just saw with Habitat, like a lot of Java-based applications. You need to figure out how you're going to get those to run on your modern platform.>>I think you got
your clouds mixed up here. I think Azure is supposed
to be before AWS. I’m just saying that if you
were to redo this slide, Azure cloud should
be before AWS.>>I think we just
did it alphabetically.>>Okay, okay, okay. Okay,
we’re good. Forget, no beef.>>So then, I’ve
lost my place now. I’ve completely lost
track of where I was. So we start putting
the applications in place. And then the next thing
people start looking at is, it’s like well, the old concept of just having
this perimeter security, this firewall is not
really good enough anymore. So, we have to start thinking about how we can have a more modern sort of approach to security on our new systems. So we kind of try to break this down into three areas: run, for your applications; secure, for your infrastructure and your applications; and provision, which is Terraform, and what we're going to look at today.>>So the security layer there. Can we talk about
that for a second? Because security
is very important. And I think that
many companies have different definitions
of security. So you want to like dive
into that a little bit, what that means to
you guys at HashiCorp.>>So I think for us it
means like zero trust. So, it’s sort of looking at everything as
a potential threat, and trying to deal with that on a very sort of micro level
as opposed to just sort of wrapping everything up in this perimeter security, because I think one of
the things that we see from a lot of the recent
attacks that come out, is that perimeter security
is not good enough. I mean, people will find exploits in
application frameworks, and then they’ll use that to be able to bypass
your perimeter security and get inside of your sort
of clusters and networks. And once they’re inside, if you don’t think
about how you’re securing things like secrets, passwords, database
credentials, access, running things like TLS
between your services, then it just makes it very, very easy for
that potential attacker to then exploit anything
that's going on there. So it's interesting you ask, because we've got a really great product which
everything that you say so.>>it’s absolutely true.>>Yeah.>>In addition to that though, what you also have
is a necessity for being able to connect all
of these parts together. So, generally,
the first problem that people find out when they start working with
microservices is well, how do these things
talk together. So, like communication
is easy enough. You can just use HTTP TCP. But, how do you do
service discovery? How does one application
know the location of another one in this kind of dynamically
shifting environment? So, there’s kind of
the connect layer where you can provide that capability,
is incredibly important, which is where we kind
of bring it down to the open source products. For run, there's Nomad, which is a scheduler like Kubernetes; we have Vault to manage your security, so all of those things like TLS, secrets, and all of
infrastructure which is the thing we’re
going to look at today. But then, you have Consul
which kind of sits as that fourth layer and can manage things like your
service discovery, configuration management,
and stuff like that. And in addition to the sort
of the open source tooling, we also offer the enterprise versions, which have additional features like, for Terraform, workflow management, and replication, and high availability, which go beyond the open source. And that's the end of the...>>That's the end
of our slide show. Does that mean it’s demo?>>It’s the end of
the slide. It’s demo time.>>That’s cool. All
right, let’s do it.>>All right. So
what we’re going to have a look at is building up some configuration
in Terraform to create a Kubernetes
Cluster in Azure. So I’m going to try
and go through and explain the kind of the concepts
and the bits and pieces. So if there’s like anything
just like stop me and...>>Whoa, we cannot see your...>>Oh, you cannot see my demo. That's okay, because...>>Live is cool.>>It's probably going to work better if you can't see it.>>Exit out of PowerPoint is what... Oh, look, look, it's right there. How about that?>>It's been there all time.>>Good job. Yeah. It's
been there all the time.>>So the first element that I want to add is a thing called remote state. And state is really, really important in Terraform, because your configuration defines what will be, whereas the state defines what is. So then, obviously, the difference between the two is what's going to happen. So managing state is really important, and you kind of don't want it lying around on a computer; you want to put it somewhere safe, and I think remote state is an excellent way to do that. So we're going to use some Blob container storage in Azure to manage our state, and we can do that just by adding this configuration block here.>>That looks easy enough.>>It's super easy, cut and paste straight
from the documentation.
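For illustration, an azurerm backend block of the kind he describes looks roughly like this; the account, container, and key names are placeholders, not the demo's actual values:

```hcl
terraform {
  backend "azurerm" {
    # Placeholder names for where the remote state lives.
    storage_account_name = "opendevtfstate"
    container_name       = "tfstate"
    key                  = "k8s.terraform.tfstate"
  }
}
```

And then the next thing we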
want to add is the provider. So the provider defines the kind of top level. You have a provider, in this instance azurerm, for the Azure API, and then you have resources which kind of sit in there. So, when we set up our Terraform configuration, we need to be able to set up our provider with things like the access keys, and the subscription, and stuff like that. So, let me just quickly
add those in there. And again, I can’t
multitask, so.>>Well, luckily nobody is looking
at your face. So we’re good.>>I mean except for the
audience, they can totally see you.>>Oh, they can totally. So, what I'm doing is just replacing the placeholders I've got here with some variables which I've defined. And this syntax that I'm using, where I'm using the dollar and then this sort of curly bracket, is interpolation syntax. What that allows me to do is replace values at run time. So, when I run Terraform, it'll replace the contents of this dollar-curly-bracket with the interpolated value, which is going to be the tenant_id, which is a variable because it's prefixed with var. And I'll explain a little bit more as we finish this up. But variables are super useful.
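A hedged sketch of that provider block with the placeholders swapped for interpolated variables; the variable names are inferred from what's said on stage:

```hcl
provider "azurerm" {
  # Each value is interpolated from a variable at run time.
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}
```

They allow you to make your configuration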
really sort of dynamic and reusable. And I think this is
kind of a key thing. You want to kind of try and
make configuration reusable. You want to make it modular, so you can share it; you can do that write-once, use-many thing.>>Yes, I like that.>>Because this is similar to how I do code.>>Yes.>>And it also
means you can just download a whole bunch
of stuff from things like the Terraform module registry and stack overflow.>>Oh, that sounds lovely.>>Which is the perfect
development flow. So, for example, here, I've got subscription_id.>>Right.>>And in the subscription_id, I'm defining a variable. So, what I'm going to do to use a variable in Terraform is I've got to define those variables.>>Sure.>>So, this is kind of
like the definition. You can see, unlike some of them, like the SSH public key and stuff, I'm assigning a default value, but I don't have to assign a default value. What I do have to do, though, is make sure there's a value for this variable when I come to run my configuration. And I can kind of provide
those in a number of ways. So, I can inject them using
the command line flags, I can provide them in a file, I can use environment variables, or I can kind of use
a combination of all three. So, it's very flexible, taking into account all of the various different workflows that people want to use.
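A sketch of the kind of variable definitions being described; the names mirror the ones mentioned, and the default value is invented for illustration:

```hcl
variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}

# A default makes the variable optional at run time.
variable "public_key" {
  default = "~/.ssh/id_rsa.pub"
}
```

So, you might want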
to use environment variables if you’re using CI, or as an organization,
you may say, file-based is
the best way for us.>>What’s best practice?>>I think it depends on what
you’re going to be doing. So, I think, you’ve got
some secrets in here, so, I’ve got things like client_id and I’ve got client_secret. I don’t really want to leak. So, I don’t want to
be committing that to a public GitHub.>>Right.>>So, what I like to do is, I like to use
environment variables. And one of the sort of there’s a utility for my code dir (nth). And what that
allows me to do is, it allows me to
have an environment variable file per
sort of directory, so, every time I change
into a directory, it’ll set these
environment variables and I can make sure that that file
is omitted from from GitHub. So, it gives me that kind
of workflow and then, when I’m running on CI so, if I’m running this job in Jenkins or something
like that, then, I can just provide
those environment variables into the Jenkins job
or if you’re doing something
more sophisticated, you could use something like Vault and all sorts
of stuff like that. So, there’s a number
of ways to do it. But I think the key thing
is make sure that they don’t leak anywhere
public otherwise, you’re going to end up with a bill that you
don't want to pay.>>Wait, what if you're a silly Windows user?>>You could probably use some stuff, or, like, I'm thinking you can still use environment variables.>>Yes, you can do that.>>You can do stuff. I think it's a great opportunity, when we're talking about open source.>>Yes.>>So, if direnv doesn't work on Windows, then it's a great opportunity for everybody out there to go out and port it to Windows, right? And maybe write it in Go as well, because it's actually cross-platform.>>Yes. Hey, Tyler.>>Move.>>JK, love you, Tyler.>>So, that's
kind of our top level. But what we want to do is go and create some resources. So, let me just, sorry, this multitasking
is really difficult.>>Yeah. I get you.>>So then, I'm going to create a resource group, and that's going to give me that top level for the various different bits and pieces I'm going to create in Azure.>>You keep saying I don't
need to do something, but I’m going to do something.>>Yes. And this
might already exist. So, you might already have a resource group
that’s provided to you by your sort of
operations team or something. I don’t have one and I need
one for my configuration.>>Got you.>>So, I'm going to create it here.>>Except it's optional. I'm not going to do it.>>So, you've got to do it. You need one.>>Okay. Yes. Sure.>>You do need one. You might not need to create one.>>All right.>>So, the syntax
for a resource, and a resource is literally a resource in Azure, is that you have the keyword resource, then you have the type; in this instance, it's azurerm_resource_group. And then you have the name of the resource. Now, the name is not what it will be named in Azure. It's a name that's going to be used in Terraform, so you can reference it later on. I've imaginatively called this one default, which is an appropriate name because I'm literally only going to create one resource. But the names do have to be unique. So, if I have two resource groups of the same type, I can't have them both called default.>>Right.>>And Terraform will
complain and say, you’ve got to give
them unique names.>>That’s good practice.
Thanks Terraform.>>So, again, we can
use interpolation, and I'm going to use the namespace as the name of my resource group, and then the location, which is the Azure location; I'm using West US 2. I'm just going to specify that in there again. So, again, very simple, just using the interpolation; it's very dynamic. I can provide a lot of this information at runtime.
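Put together, the resource group plausibly reads like this; the variable names are the ones mentioned, the rest is a hedged reconstruction:

```hcl
resource "azurerm_resource_group" "default" {
  # Both values are supplied at run time via variables.
  name     = "${var.namespace}"
  location = "${var.location}"
}
```

>>And then, onto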
the complicated stuff which is really not
that complicated. Thanks to some
lovely people who’ve created the Managed
Container Service.>>Yes.>>So, what we’re
going to do is, we’re going to add
our Kubernetes. Oh well. Look at
that. How about that?>>And this block is pretty much literally cut and paste from the Terraform documentation. I'm not smart enough to figure this out myself.>>Sure.>>I think stealing code is a much better approach.>>No. Why write code that already exists?>>Exactly.>>Why do that?>>So, we can again just go through and fill in some of the details: namespace, and, what did I call this, k8s_cluster_name. I can actually use multiple interpolations inside one value, so I'm going to create my name, which is the namespace, a hyphen, and then the k8s_cluster_name. Location: again, I'm just going to
use my location variable. Now the resource group. So, I need to provide
the resource group name. And what I could do is, I could hard-code this, or I could literally use
the same name that I used in the previous block when I
created the resource group. But what I can also do is
use the interpolation syntax, so I can reference the name of the resource group,
it’s the type, so it’s
‘azurerm_resource_group’ and then I’m going to use
the name that I gave it, and I imaginatively
called it ‘Default.’>>Perfect.>>Then I want to get the attribute from that
resource group, which is name. And you can find all of
the various attributes and the outputs and inputs on the the Terraform
documentation, and they do kind
of loosely add here to the Azure API if you’ve
been using that before.>>I want to add to
that Terraform documentation is actually really great.>>Yes, it’s really great.>>So, congratulations on that.>>Thank you.>>Great documentation
is important.>>I will take full credit.>>As you should.
Why wouldn’t you?>>So, the other
interesting thing about doing this interpolation
in this way is, because I’ve
referenced the name of the resource group in
my interpolation syntax, I’m actually giving
Terraform a hint to tell it that it
needs to create the ‘resource group’ resource
before it goes ahead and creates
the container service. So, by building up your sort of configuration and
using these links, Terraform is able to build a graph of all of the various
different dependencies, so it knows what it can create ahead of time and
what it can sort of delete and all of the various different bits
and pieces like that. And then, again,
I can go through, fill out all of the various
different bits and pieces. I think this key data is
something that’s worth noting, because what I actually need to provide
in the key data is the contents of
my SSH public key. So, in addition to sort of just using
interpolation for links, I can actually use some
of the inbuilt functions. So, I can say I want
to use the file, which is going to load
the contents of the file, and then I can sort of
give it the path which I’m defining in my variables
as ‘public_key.’ Like that. And then,
what that will do is, when it run time, it will go read the file
and it’ll insert the details into that attribute. And that’s pretty much all you need to do to kind
of set up Container Service. So, I’m going to literally do, I can run Terraform plan, Terraform apply, and we’ll
see that up and running. This takes about
15 minutes to run, so I’m not going to do that, I’ve prepared one earlier.>>Yes!>>But what I can
do is I can very, very quickly show you how we can deploy an application
to Kubernetes, which again, is going to
mean you have to stand and watch me typing
a whole bunch of stuff again.>>I enjoy watching you type. I don’t, I’m just kidding.>>So again, this is an interesting element because I said I was using
remote state in the other one, which means that the state is stored in container storage.>>Yes.>>Now what I can do, because in my application configuration I need some of the details from that state so I need things like my Kubernetes certificate
so that I can connect to the cluster and I need like the address
of it and things like that, because it’s remote state, I can actually read the remote state from a
separate piece of configuration.>>Nice.>>So, yes, it gives you
this really nice ability to have a core infrastructure and then an application
infrastructure. So, maybe your core
operations team are going to manage
your main clusters, but your application developer should be able to get
on and just kind of write their own configuration
to deploy to that cluster, and remote state gives
you that capability. It gives you that read access, but it doesn’t mean
that you can kind of mutate the state so I can’t accidentally delete the core infrastructure
cluster, which would be>>Unfortunate.>>Yes, it would be a bad thing.>>Yeah.>>So then, what I can
quickly add is my backend, and I’m using
Terraform Enterprise, so in the same way as previously I was using
Container Service as my backend, I’m going to use, for storing my remote state, I’m going to use Terraform
Enterprise for this one. And then again, specifying the provider for Kubernetes. So, this is the same as
the provider for Azure. I need to give it
some certain information, such as the host, the clients,
certificates, and stuff. And I can fetch that
out of the remote state. So, I can use the data elements
and then I can say ‘terraform_remote_state,’
and then the name of it so I want ‘core’, and then the element that
exists in that remote state. So, in this instance,
it’s ‘k8s_master_dns.’ Then exactly the same thing for the other elements. Data. Dot.>>I’m going to let you type through this and stop bombarding you with questions
for the sake of time.>>Yes, I could have just cut
and pasted all of this in, but then that would have probably meant
the demo would work, and everybody wants
to see a failing demo.>>I certainly do. Yeah. But I do feel bad
about the dead air. I do feel bad about it. I’m not going to lie. I’m not a good singer
though, you guys, so I’m not going
to sing you a song.>>I’ll try and be as quick.>>Ken wants to know if you can smile
while you concentrate.>>No.>>No.>>I can’t do that.>>No. Absolutely cannot, Ken.>>That would be multitasking. So we create that. Then we can add
our Kubernetes application. So again, I’m using
Terraform to provision that. I’m going to create a pod using the kubernetes_pod
resource and this kind of loosely translates to the sort of
the YAML files you might work with if you’re
using Kube control.>>Are you missing ad R in
Terraform, Nick, somewhere?>>Possibly.>>Computers are hard. I get it.>>Hopefully, it’s going
to tell me before I>>That would be nice.>>commit that. So again, filling in the things, I’m
going to put the image. I’m going to use
a very simple http-wcho, which is just going to pretty
much glorify Hello World.>>Perfect.>>And I’m going to
spell that correctly. I’m going to try and spell
that correctly again. The name of it, I’m just going to
call it http-echo. And I need to specify
some arguments for the start up arguments
which is “listen”, and what I want it to
http-echo which is Hello.>>Hello with a W was fine, too. Put some flare on it, Nick.>>Hello Open Dev. We specify the container port and then that’ll create the pod. But if I want to create the pod, I also want to expose
it to the world, so I need to create
a Kubernetes service.>>Sure. You may
want to check on remote state for that extra R.>>Oh, I’m going to check on it.>>All right, cool. I’m worried about the R.>>Again, we’re just going to
create a service, fill in the namespace and then the application link. And again, this is using
the interpellation syntax so I can specify
kubernetes_pod.app.>>What snippet manager
are you using?>>I use Alfred, which is pretty awesome, because I’ve got
a memory of a goldfish, which means that I can’t
remember a single thing. So that’s all done there. Very, very finally, let’s
add a DNS entry record, and I’m going to
use multi provider. So I’m going to use DNSimple to be able to do my DNS records.>>They have some of
the best stickers. I just want to
shout out a Simple>>I don’t own any of those so if they want
to send me a care package.>>They’re so good.>>So DNS TLD, the app link which again is going to be
a link to Kubernetes. So it’s going to be
Kubernetes server.>>And then you
should change it to Z?>>They should just be all changed to K8 so I don’t
have to type as much. Load_balancer_ingress, that one refers to IP address.>>And you might want
to check balancer.>>Totally. Okay, something like that. And then let’s see. Where did I spell
that Terraform wrong? Oh, there. There we go.>>You’re not helping.>>I’m not.>>Okay. Oh, there’s that
R. Oh, thank goodness.>>So then, now
I’ve got that done, I can just do a quick check
so I’ll run a Terraform plan. And that says it’s going
to go on the way and it’ll create three resources
which is cool. But I don’t want to run
this from my local machine so I’m going to
add this to GitHub.>>As one does, yes.>>And I’m going to
push it and I’m going to let Terraform
Enterprise do all of the planning and applying
for my central location updated stuff because cohesive commit
messages are important.>>I agree.>>And then I’ll push that.>>Yes, it kind of work.>>That is the question.>>Look at that. Look at that. It’s
in the dashboard.>>So then, that’s picked up. So it’s running. It’s also telling
me that we’ve got a brand new version
of Terraform.10.8, which literally must have just been released this morning.>>Oh, what a coincidence, Nick.>>Which would be
amazing because I’ve now got something to play. So you can see that, again, it was the same thing
that I ran last time. So the plan, I can
see that in here. Now the workflow of
kind of TFE allows me, as a developer, to push that. Somebody can validate
and then I can just kind of say, “Okay. Looks good. Hit confirm and apply.” And then Terraform
is going to go away and it will provision
that part in Kubernetes. Fingers crossed, pray
to the demo gods, and this is going to
take a couple of minutes.>>Perfect.>>I’m not very good
at singing either.>>Yeah, you’re not.
You’re not very good at singing either. I
agree with that. Okay. So we talked a bit about
Terraform Enterprise. Can we dive into the details
for the poor people like me who don’t want to use the Enterprise
version because I want to be with the people?>>Yes. So actually, the Terraform commands and
all of the configuration that I ran is part
of the Open-source. All that Enterprise does is adds an additional sort
of workflow layer. I mean if you’re
not an enterprise, then Open-source is probably
more than good enough. And you can still kind
of use Open-source with things like Jenkins and GitHub pool flows and stuff like that. So you can you can get
all of those features. I’m literally just showing
TAFE because I was too lazy to set up a Jenkins server
this weekend. But we don’t, in any way, kind of cripple
Terraform or any of our Open-source
products which kind of force people to upgrade.>>Yes.>>The capabilities are there. Generally, if you
are an enterprise, and you probably need
those extra things and then that makes it a sort
of a sensible upgrade. But the basic stuff like in the Open-source
is pretty amazing. There’s no way like feature crippled or
anything like that.>>I love hearing that. Also at HashiCorp, you talked about Terraform being
an Azure portal.>>Yes.>>I saw that you had that up if you wanted to
show it real quick.>>You can, actually,
from the cloud CLI. Which one should I use? I used Bash in Linux. It creates a storage. So you can actually
call Terraform based straight out
of the Cloud CLI. Terraform is already loaded
in there for you to use.>>That’s rad.>>Yeah. It’s super cool.>>I mean, oh, snap. I’m just initializing
my account. I don’t know which is. We kind of found two parallel processes
which were long running, what we had to wait for.>>You know, Seth, nowadays, the kids
say, that’s totes cool.>>I’m old, so I
was going for 90s.>>Yeah.>>Yeah.>>Yeah, you did that.>>Yeah, I thought I went there.>>So what this is, because Kubernetes is going to create
the load balancer in Azure. So that’s going to take a little while to
provision those services. But it hasn’t failed
yet which is for me, oh!>>Oh.>>And it worked.>>Oh my God, it worked.>>I shouldn’t be
surprised here. I’d be like, “Yeah.
Of course it worked.”>>No. I’m happy. I’m sad, I wanted it to fail.>>Everybody wanted it to fail.>>We all did.>>And look, it really, oops. It really works as well
because Hello Open Dev. Oh, and I spelled
Open Dev wrong.>>Yeah. Well, you spelled a lot of things
wrong today, Nick.>>I’m pretty good at
spelling things wrong. So that’s pretty much it. That’s Terraform in
quick 30-minute overview, but you can go ahead and create your managed Kubernetes
service on Azure. You can write in the Terraform configuration for your deploys. There’s no magic.
It’s all declarative, you can commit
everything to GitHub. Full version control, more important in
version control for me is the capability of git blame. Now, I don’t advocate git blame, but it’s a thing and
it’s incredibly useful. Put your config in
there and you can see who screwed up
your cluster when they created a bad config.>>Yes. So we have about
four minutes left. What are some
great resources for people if they’re
just getting started? And also, I don’t know, if people have been
using it for a while, what are some weird resources that maybe they
don’t know about?>>So I think there’s one
of the things we announced at HashiCorp is
the Terraform registry. And what we’re trying to do with the Terraform registry
is bring all of those community modules together so it gives you the capability of just
kind of saying hey I need some resources like
a load balancer in Azure. So rather than write the
Terraform configuration by hand, you can just leverage
a module which the Microsoft engineers have written for you and then you just load
that in as a module, it’ll give you all of
the various different sort of resources that go into
creating a load balancer, and you can build up your infrastructure using
this modular configuration. And again, because it’s modular, you’ve got the
capability that you can update modules and you can do things like versioning
modules so it’ll give you a starting point
to look for things but also it will get you up and
running super super quick. So definitely check
the registry out. I think the Terraform
website itself has got pretty comprehensive
documentation on all of the provider and you actually
find documentation on all, kind of like 70 providers
or something in there now.>>That’s extensive.>>Yeah, but you’ll find
all of the things in there. And likewise, with the registry, what are going to
find is a bunch of cheat sheets as well. So if I wanted to create
an Azure MySQL server, chances are I could
probably just cut and paste this block
into my configuration, run my Terraform plan,
Terraform apply, and I will have a mySQL server in Azure which is just awesome.>>That is awesome.>>And I think occasionally
we’ll do webinars. I think we’ve got
a webinar tomorrow where we’re going to look at
some best practices around.>>Cool.>>Terraform.>>Where can people find that? I’m sure that
you’ll tweeted out.>>Yeah, we’ll
definitely be doing that. But if you go to
the HashiCorp website. We’re also hiring as well.>>You are also hiring as well.>>We are also hiring.>>All you people in
the audience know. Don’t do it.>>Yeah. So there’s
a webinar tomorrow about the best practices around collaborating on
infrastructure as code. And if you hit the HashiCorp
website and go to resources, you’ll be able to find the free registration button there.>>Very cool. And
on the website, you guys also have a spot
for use cases, right?>>Yeah.>>Very good.>>Yeah, so
the HashiCorp website is kind of a link-off to all of the independent
documentation sites for Consul, Nomad, Terraform and Vaults
and things like that, so it’s a good sort
of landing page. We’ve also got some, I mean I really like some of the things that we’ve put on. So if you’re thinking
about sort of adopting cloud or if you’re kind of looking at a way of
sort of saying, well, hey, how do I sell DevOps, we’ve got a whole bunch
of white papers as well that you can
sort of download, rip off, and pretend it’s
your own information.>>Yeah, you guys were really thoughtful with your website. Appreciate that. And
we are out of time.>>You have a guest
though, don’t you?>>We do have a guest, but.>>It’s OK, we’re
all friends here. I want to hear what
she has to say.>>I do too. I feel bad for making Vicky sit in there.
Vicky also has a box.>>I’m just going to
awkwardly insert myself.>>Please do that. Hi. How’s it going,
Vicky? You sat over there. Well, Nick just thrown on, he just took all your the time. Let’s go. Sweet
keyboard, by the way.>>Thanks. You too.>>I know, I know.>>Cool. So I’m just going to
show you how we use Terraform on Azure in OpenAI. And we totes use all the things
that Nick just said.>>Totes.>>Is that a New Age thing
too that I need to learn?>>Yeah. Totes.>>Yeah. Cool. So, quick
thing about OpenAI. We are a nonprofit
research lab and we – OK.>>Being live is cool.>>We do a lot of
basic AI research with the goal of safe artificial general
intelligence in mind. So onto that end,
more concretely, we’re working on a lot of different projects that we think will push us towards different breakthroughs
in AI and various fields. So one of the biggest ones
that we’ve recently released is the
game-playing Dota bot. We had the bot play in an international
tournament with pro e-sports players and we won.>>Nice.>>And then like some
of the other ones that we’ve kind of talked
about are robotics, and currently we’re
working on manipulation and we’re also doing
safety obviously.>>Yeah, all those things are
fascinating yet terrifying.>>Yeah, they are,
on many levels. So one of the things is
their demands on infrastructure because we have so many
projects, they’re so different. So one of the things
is how workloads differ on scale depending on the lifecycle of
the research that they’re in. So, when you’re
a researcher starting out, you’re really
playing with a couple of toy experiments and maybe you just need like
eight or 16 cores or something.>>Right.>>But then going to
scaling up to play Dota, so we’re going to
start to talk about tens of thousands of cores
for one single experiment.>>Wow.>>So the infrastructure
has to be super flexible. And because the
different teams are on different stage
of the lifecycle, then we need to make sure that if they start scaling
up or maturing, we can just bring up a whole new infrastructure catered
to their needs, and that’s why Terraform
is kind of like a godsend because we can copy this module and replace it with some different variables
and launch a cluster.>>I think her story is
cooler than yours, Nick.>>I was leading up.>>So I wanted to show a little bit
about how we use Terraform and we have
this central Terraform module, let’s just call
that the cluster. And within this module, it references a bunch of submodules that
bring up the network, the storage and
all the peripheral things that you need for
Kubernetes cluster. And we actually built
our Kubernetes cluster from scratch because
we wanted to make sure it’s flexible for research.>>Overachievers.>>Well, we also just because we started
using Kubernetes when it was kind of still
a baby project, and so all the things
weren’t there, and then we just
kind of kept going. And then you can see
we use the same module but just plug in like
different credentials and different VM configurations and we can say half
like a dev cluster, a CPU-only cluster,
GPU and so on. So now we think we regularly operate
three to four clusters, and whenever there’s a deadline or whenever we
might bring up more, and then we have
so many clusters.>>So many. Oh look, parrots. Why does so many people hate that parrot? I love that parrot.>>Yes. It’s all over our Slack.>>As it should be. The parrot’s fantastic.>>Yes. So, let’s see. OK. So, just very quickly, this is how our Terraform module looks in our dev cluster. So we have the Atlas or the Terraform Enterprise backend
and then we have also referenced a separate
Terraform project that we depend on with
our core infrastructure. And then we have the cluster
module that brings up the Kubernetes masters and all the networking
and stuff like that, and then we configure different types of workers
that we’re going to use, and of course we end
with a Jenkins module.>>Man, that’s beautiful. Look at that.>>So, OK, let’s see. I hope no one’s changed anything in
the cluster at this point.>>Oh great. Oh cool.>>Yeah. That’s totally.>>Oh. Is it what happened? I changed it.>>This is how programming
really works, guys.>>it’s like bursts
of screaming red text.>>Yeah.>>Yeah. And hopefully no one on the team is doing
anything on this cluster. Go. Okay. So, requests
is ready to go. And, I don’t have anything
running right now. So, I guess I can
use run a thing, like kind of, on the toy
and of the spectrum.>>Yes.>>What people do on
our infrastructure. So our internal infrastructure
team is very small. And that’s why Terraform
is great because we can like merge
our infrastructure as code. And basically, anything
that people bring up like manually that’s
not in the code is like totally
cool to delete it. So, we usually
make sure everyone writes their
infrastructure’s code. And then, we are like those like
an internal product so the researchers use our infrastructure as a service
and then they interact with the Kubernetes
API directly. So, let’s see. So, I’m just going to run this one experiments. It’s a lot of layers.>>Many. So explain what’s
happening right now.>>So when we kick
off the experiment, we first do docker build
and then docker push. And most of the time, well because most things are already cached so
that’s pretty quick. And then we start, we launch jobs on
Kubernetes and services, and we also bring up
the TensorBoard so people can in real time monitor
their experiments. So, let’ see. So we have all the
things that are. We’re not on 1 8 yet. So, let’s see what it can do. I just want to make sure that
we can launch this quickly. This is our DEF clusters
so. So, hopefully things can get scheduled.>>This is all very
fascinating actually.>>Well let’s see why it’s
not scheduling right now. Of course this
happens right now.>>I like it live debugging.
Always my favorite. Makes me feel human.>>Oh, why is it pending? No. Got first.>>This happens
to the best of us.>>Yeah.>>It’s like 90 percent of my life of what is
happening to you right now.>>Oh great! It’s creating. Cool. It says it’s
pulling docker image.>>Come on docker.>>Yeah.>>What can you mean, yeah?>>Oh. It’s not doing a thing.>>Come on we can do it.>>Well, if you hear something like this might
happen. So in the meantime.>>I like.>>So this is like
a toy example that usually finishes training
in a couple of minutes.>>Yeah.>>But, of course like, if you let it go for hours you’re going to
get much cooler results. So for example some of
the things that you can do. So all of our experiments
require like a simulator.>>Yeah.>>So, we run like
the simulation on CPU. And then we run the machine
running on the GPU. And then our Kubernetes cluster usually has like mixed VM types so we have like a bunch of CPUs and bunch of GPUs and then, well sometimes they talk over the funnel network but sometimes that’s not
good enough performance. And then we go straight
to the host network. But anyway, so, that’s kind of a very basic set up and then you can start doing
things like this, where like-.>>That is so cool.>>Actually do
things. And dockers still doing docker things.>>Come on docker.
We’re counting on you.>>Jeanny prepared that song?>>Yeah no. So people want to find out more about this
where would they look?>>So most of our things are on opendev.com or check
out our GitHub. Oh look, it’s like
learning to walk right now. So it’s not really
good. It’s like.>>That’s what
Nick was walking,.>>Is that a video of me
going to work this morning?>>Yeah it-.>>Oh.>>Yeah. We didn’t
want to tell you.>>Yeah, weird.>>Yeah.>>He’s going to do that for like a few minutes
and then it’s like,>>And then, and then,
it’s going to learn.>>Yeah.>>It’s really cool.>>Yeah. Sometimes it just looks like, it’s like tiptoeing, its
like walking like this. Yeah. If you want to just like try running
some of these experiments. All of the code is already on our GitHub and we publish
like baseline algorithm, so you can just like
clone it and run. And we also talk a lot about our infrastructure
on our blog.>>I want this in
a gift so I can tweet it out being like me
going through life like-.>>Oh yeah, I know we have many. So they look like
they’re just super drunk.>>It might be a prom, when I went, you know that’s
what it looked like dancing.>>Oh you went to prom. Lucky.>>No I didn’t. My mom took me.>>Yeah. A typical classic set. His mom also drove him
here. Just so you know that.>>It’s true.>>Cool.>>All right. Very good.
That was really cool. That was much cooler than Nick. Sorry Nick and I’m kidding,
I love you Nick. You’re my favorite.
I mean Seth is my favorite. So we’re going to
go over to Seth.>>Let’s give him a hand though.
That’s a really-. AI is fun, right?
I love AI myself, it’s what I studied when I went to
online school at night.>>When was that?>>I also got my Jedi masters
certificate at the same time and became a lawyer.>>So, I’m pretty
excited here to have Christoph Wurm to talk
about a Elastic Stack. So, why don’t you give me
like the little five-minute, like what is a Elastic Stack? Why it’s important? And then what you’re going to
talk about it, today?>>Sure. Sure. Right.
So, Elastic Stack used to be known
as the ELK Stack. So, a lot of people
still know it as the ELK Stack and
that’s totally fine. And the centerpiece of
it is Elasticsearch, probably the most
well-known piece. And it’s kind of
interesting that the story behind Elasticsearch, the story goes that the creator
of Elasticsearch, Shay, was working in London while
his wife was studying to become a chef and he wanted to create
a recipe search engine for her. And so there was no good
like opensource tool available back then
to do this things so he started creating something and he called
it Elasticsearch. And then other people, so
it was a search engine, it was a full-text pure
opensource search engine and then he made it opensource, he put it in GitHub, that was
back in like 2010, I think. And then people
started using it, the community started
picking it up and one of the first bigger users of
it was actually GitHub. GitHub uses it for searching
through all the source code that is checked into all of the GitHub
repositories both on, on GitHub.com but then also
on all GitHub enterprise, deployments and then other
people picked it up as well. It’s used as the search engine
behind Wikipedia, behind all of eBay, and behind Microsoft’s MSN.com. So, it’s used as a search engine
for a lot of websites. And and that’s where
it all started. And then people found out that a search engine is actually also a pretty good place to
put your machine data, to put your logs,
and your metrics. A search engine is very fast, has new realtime reads, has very high volume writes. It’s easy to use as
API’s for everything and it also scales very well and scaling means both scaling
up and scaling down. So I can run it on
my laptop and in fact, I do or I can run it on a hundred machines
and then of course, I can have a lot more data in it and so search
a lot more data. But still all of
the things, all of things, I just said are still
true it’s still very fast, it’s still very real time and so people found out that well, you can actually also use it
for something like log data. And we had, with the community step up and Rashid
out in Arizona created a UI on top of
Elasticsearch called Kibana and Shay saw this and thought it was
really cool and reached out to Rashid and
asked him to come on board, by then he’d founded a company
around it and so Rashid agreed and came on board
and then later Jordan, down in the Bay Area created a tool for getting log data into
Elasticsearch called Logstash. And he opened sourced that and then Shay
and Rashid saw this and found this really
interesting and so they offered him to join
the company and he did. So, it’s kind of like what Ashley was talking at the very
beginning of today, it’s like writing
opensource actually creates you a job. Right?>>Yeah, you can make
your own job out of it.>>You can make
your own job, exactly right. So it’s like one of
the great hiring tool actually like we look at contributor’s and then we
just offer the contributors to join and that’s how
companies like ours grow. But, so that then
became ELK, right? Elasticsearch, Logstash, and Kibana, that’s what
the acronym stands for. And then we got
some other software in and so we dropped
the ELK acronym and renamed the whole thing in Elastic Stack
but basically what it’s being used especially
in the DevOps space is for ingesting logs and ingesting metrics from
just basically everything. Most of the things
that we’ve seen today whether it’s Kubernetes, whether it’s resources created using Terraform,
or anything else. All of these the servers, all of these containers,
all of these be VMs, all of them generate
a ton of data, a ton of logs, a ton of metrics, and you need some kind
of central place where you can put
all of this and then look at it and just
work one center place and that’s basically what the Elastics Stack is all about.>>I see, and so, I remember the ELK Stack, I’ve heard about
the ELK Stack, I didn’t know that it was the search, the logging and
the Kibana which is the UI over it and the interesting thing
is like now that, I think about it,
I usually logs, they are just these ginormous
files that you’re like, hey where do we put these and then the IT ladies
is like, “Hey, we’re running out of disk space, what we do with this stuff.” And so this is interesting
because my field is machine learning and
this is the kind of data that we use to create all sorts of machine learning algorithms and so it’s good. So, I would love to see
how this actually works. You have time to
do a demo for me?>>I think so, Yeah.
Let’s see that. So, what we see here is, we
see Kibana and that’s the UI, and this kind of preview so, it’s Version 6.0
which is going to be out in a few weeks but
you can see it here. And so, what we see here is, what it looks like once
you get the data in. So you have the data
in Elasticsearch, in the the data store
and then you have the UI on top here where the first
thing they see is, you just see you data,
you see the raw data. So, what we can see
here is we just have this nice histogram at
the top where we can see just the number of log
events that we have over time, the data that we
have in here is from our own website, elastic.co. This is like about six
weeks of traffic on this website about
14.5 million hits. And so we just
have all of them in here just standard web logs. We can look at the at the details of
one of those events if, I expand this and we can
see the different fields, the different fields
that are available here, things like the number of
bytes that was being sent. We also enrich the data so we have things like the city name and the continent that we
get from the IP address, we can look up the IP address
and the database and then we know where this IP address came from and then of course
all kinds of other things, the remote IP address
that was being used, the response code
that we sent back, the UL that was being used. And you can see it in this nice table form but what we can also do is we can also see
this in just the raw JSON. Everything in the Elastic Stack, in all of these products
everything has APIs, all the APIs are
Rest and JSON so it’s quite easy to
use as a developer.>>So if you’re, like the question
that, I have is when, I’m writing an application and, I want to use the Elastic Stack
to do all of this stuff, is there like a special, do I have to add
some extra code, or do I just use
my standard logging procedures by putting things out the files, It looks like using
NGINX here, in this case. Is there some
special sauce that I need to make this work?>>No, no, that’s
that’s the beauty of it. So, we have we have all these
tools available that makes it easy for you to use on
your existing data, right? You already have the log files and all applications or you log out something
usually there is like log for Java applications, other applications have
their own standards. So, there are already are log files and then
all you have to do is, you just have to configure
something to pick them up and then do
maybe a little bit of like enrichment
transformations and then get it stored
in Elasticsearch. But none of this requires code, all of this just requires,
just requires configuration.>>I see it, so as a developer, you just do whatever you do. Make sure you do
a lot of logging, just to make sure we
know what’s happening in your application and then
you can use any kind of, hey, let’s just pick these
up and put them here.>>Yeah, exactly. What’s important though
is like this concept of structured logging where a lot of log data to be
honest is just really, really unstructured,
is just a log line and sometimes that
can be problematic where if you have
some fields in there and then a field contains something like an unescape data structure, it’s almost impossible
to know like where one field ends and where
another field starts. The best thing to do is
actually to log something like JSON or maybe key value. But JSON is the best,
structured logging then it’s really
easy to get the data into anything really and to just do interesting
things with it. But anyway, so good
going with the demo here, so we can look at the raw data, but we can do a bit more so what we can
for example do is, if you want to see everybody who has
access to certain UL, we can filter down
on this UL and, I can just click here and
then it creates a filter, a global filter and
it just filters down the whole view on just requests
for this specific UL. And beyond that what,
I can also do is, I can have a search
bar at the top, like remember it’s
a search engine so it can do search and I can just enter something that
I want to search for. For example, if I want to
search for owner requests if, I want to see owner requests
from let’s say Seattle, I can just type in Seattle, hit enter and then
if your list is on, that as well and
then I can just see the requests from
Seattle for this UL. Another interesting thing that, I can look at is let’s pick just any one of these
requests at random. What I can do is I can look at what we call
surrounding documents. Basically, the idea here is that when you’re looking
at something specific, when you’re trying
to find something specific you might be
looking for something like an error message like somebody called
customer support, “Hey, I get this problem I get this error message
on the website. What’s happening there.”
And so the operator goes, the admin goes, you all
guys who are watching you go and and you look at your logs and you want to
find the error message. So you search for
the error message but then the error message itself
is just one thing. What you’re probably
want to look at is you want to
look at, “Okay, what happened immediately before we got this error message? What was the action? What was being done? Which user was
on which IP address? Which service did it hit?” And so we can see
here the context. We can see here
the the log messages that were immediately before and after the specific
message that we selected and what we can do here is if we
want to for example, filter down to
a specific IP address, we can also do that
and then we can see everything that
this IP address has been doing. So we can see that it requested a bunch of some blog posts
and a bunch of assets, images that were probably
linked to on the same page.>>This is cool because
as someone that’s monitoring my website
I can actually see the paths that people
go through my website. I can start to
test things and I’m assuming and I don’t
know if this is true that I could have
this data queried via some other program to do additional analytics on
top of that, is that right?>>Yeah. Yeah you can do that. Everything here is
very open system. You can just access all
of the data via API, the SQL language
behind it on JSON based and we have language plans available
for Java, for Python, for Ruby, for whatever you have and
you can just use that and just write your own
little scripts and applications and integrated with other things that
you already have to access the data that
is stored in here.>>That’s pretty impressive. So. Again like you said there’s no real work that I have to do other
than put out logs. And then what is
the structuring look like like? Let’s just say I’m
putting out like comma delimited files do I have to do some process that
changes it to JSON, to move it over or
will it read the CSVs?>>We can we can read the CSVs. So we have a bunch of
different input plug ins for things like
just standard log files. But also CSV files or XML files or we can also read
data from databases. If you have something in a relational database
and you want to pull it out of there and
get that into Elastic switch, no problem we can also do that.>>Fantastic. So is
this all of the logs, is there anything else
you wanted to show us?>>Oh yeah definitely.>>Let’s do it.>>So, this is just looking
at the raw data and it’s one thing that’s
really useful and that a lot of people
are dealing with it. But then the other thing
that you can do is you can also create
something more visually, you can create
visualizations and you can create dashboards. Let me just show how that works. So on the left side
here I can switch into visualize and then I can create, I have a bunch of
visualizations already in here but I can create
a real visualization. You have a bunch of standard
chart types available here, things like area charts or different pie charts,
line charts, pie charts, maps and
all kinds of things. I want to keep it
really simple here so let’s just create a pie chart. I mean it’s like that
select the date type. So I’m going to
select my log files. And one thing that for example will be useful to
look at is look at the proportion of
successful requests versus unsuccessful requests. Since this is
nginx Web log data, look at the HTTP response code
that we sent out. So what we can do here
is we can we can split this pie chart by
an aggregation and as an aggregation we specify a Terms aggregation
for a specific field. I believe it’s
called response code. Here we go, so let’s split
this by the response code and then very simple
what we see here is that luckily most of our responses
seem to be successful, over 75 percent are
just dumb HTTP 200. That’s great. But then
the next one is actually 404. So we source on our phones
that will be something interesting to look further into and we’ll do
that in a second. And then a bunch of other response codes that
we’re yet to use yet. What I can do now is when I’m unhappy
with this visualization, I could save it and then I could take several
visualizations and put them on the dashboard. And at least I have one of
the dashboards opened here. So our dashboard that I created here that shows us a bunch of different
things from the data. So instead of looking at a very specific thing
like what we see here at the top of the dashboard is just to aggregate the view so I can see that in total life
almost 50 million requests. I have 850,000
unique IP addresses, unique visitors on my website. I can see a map of where
my requests are coming from. I can see the pie chart that
I had I just created with the response codes and done here I get
more detailed information. I’m looking at things like which browsers and operating systems
people are using. So since it’s a public website, this log of webcrawler and vaults and things
like that on there so a lot of the operating system that we can’t really
make sense of that. But then we have a lot
of Windows 7 users, many of them are using Firefox. Some of them are using Chrome. Windows 10 also pretty popular, has actually more chrome than Firefox users different
from Windows 7. Some Mac users,
some Linux users. And we can see
the most common URLs. The most common research
that people are accessing. Not surprisingly it’s our icon. But also we can see
some really active APIs here. In fact it’s kind of interesting
that the most active API here has almost 500,000 requests I remember
it’s like 40 million, 40 million requests here, and that’s kind of alot
of requests there. And what we can
do here is we can look into what
this IP address is doing. So if I filter down on this then this whole dashboard
filtered down to just this one IP address. Apparently an IP address
somewhere or someone in France that
has been active for only some period of time that we have data for but
made a ton of requests. But then actually interestingly, it made almost all of
its requests against just brute. So I don’t know might be the world’s worst
webcrawler or something.>>Yeah. That’s impressive. As I am looking at this, and any time I see
you like a tool that has this kind of power, I always feel like we need
to step out, and level up. And how do I think
of it because I mean obviously there’s
so much I can do, what is some thought process
that needs to go into how I set this up properly?>>Yeah, yeah.>>Because I feel
like there’s got to be some overarching principles like some stages of thinking
about these things how would you verbalize that?>>Yeah I can give it a shot. So I feel like it’s
actually my job right. I talked to people
the whole day. I used to be a developer, everybody who talked today
and I used to write code the whole day and now I say like the whole day I just
communicate with people.>>Sure.>>And so I talked
a lot of people especially Don inthe
Bay Area about this and the way I think about
these projects is I think of them in three stages. The first stage is always
getting the data in, ingesting the data,
and it’s probably the stage that you can
most easily think about. It’s also the one that you
can most easily quantify, kind of measure by and
it’s something that a lot of project teams spend a lot of time at. How
do I get the data in? Now there’s two other stages. I think that
the second one will be making sense of the data
and it’s actually what I’ve been showing
here where I’m just looking at all of the raw data
doesn’t really make sense, you have 15 million requests
here and that’s just a demo in
a real world deployment you would have
a lot more data in there. So you need to somehow
reduce all of this. All of this data
that you have to something that is
meaningful to you. And so there’s two ways that I’ve shown
you of doing that. One is to filter down to
something specific that you’re looking for or to
filter for the city, for the specific URL. What it showed first and
that’s one way where we are using all these
different data points to just a few data points
that you care about. And then the other way which you can do is instead
of going down into the details you
can go up which is this dashboard where
you’re looking at an aggregated view of the data but no longer at
individual requests. The second step making sense of it and then the third step is doing
something with it. There’s usually two ways
that people do something with this insight
that they found. One is that they just
do something manually. You see something the data
like here we’ve seen this IP address and now we could decide what to do about it. Maybe having
500,000 requests for the same thing from one IP
address doesn’t really make sense so we might
decide to fix on a blanket or we might want
to further investigate. So that’s the the manual way. But then you could also do
something more automated. You could have
the system reach out to you for example in form of some alerts
like you might get an e-mail if something
like this happens, or something like
that and you might define thresholds
and things like that. So yes I think of
the three stages; getting the data
in, understanding the data, building
your searches, building your dashboards and then doing something
with it like building some alerts for
example or establishing some kind of workflow as
something of reacting to it.>>I see how you do
the understand part. The ingest part,
to me that still-, probably is there
some process is this part of the elastic cert or the elastic stack that does the import or
import of the data? Or is it something you have
to do external to that.>>Yeah yeah. Good question. So let me show
an architecture slide.>>Let’s do it.>>And we can look into that. So what I have here is I have only one slide with me here. And let’s see if this
advances. It does not. Here we go.>>There we go.>>Here we go.>>There we go, okay.>>The MacBook touch bar
actually saved me.>>Oh wow. There’s one good use
for it, right?>>There is, yeah. Advancing PowerPoint
slides when they’re-.>>When they’re frozen.>>When the track pad
doesn’t work. Right, so we see the kind of like an azure architecture
for deploying Elastic on here. So, on the left side, we can
see how the data gets in. Now, there’s two main ways
that you can get the data in; one is to use beats. Beats is a framework
that we have, a collection of
different data collectors so these are small little agents
just what an end goes, so don’t take up
a lot of resources. They would run on
the actual Azure machines on the either the virtual
machines or if you have containers in the containers
and they would collect data mostly log files, system metrics and
application metrics. But it could also be
things like network packets and you just collect these data
and just send it forward.>>I see.>>Another way of
getting the data in on Azure is we have
an integration with a bunch of different Azure
services with if you have some some files already in storage blobs then we can
get the data out of there. If you’re funneling
your events through something like a Service Bus or Azure Event Hub then we
can get it from there. Or as already mentioned, we can also just pull data
directly out of databases. So this is the out-of-the-box
way of getting data into into Elastic and beyond that actually we already
talked about the API is right. You can go out and do whatever you
want on your own right. Like if you if you
want to get your hands dirty and actually do some development then you
can of course also do that. And then what we have is you collect the data first
then the left part. You might have to
do some kind of enrichment for example
for like locks, you usually have
to do some kind of field extraction where you just have a big lock line that has multiple different fields
and has the time stamp, an IP address, username, and some other information that you want to
extract that into its own fields and so that
is something we can do. We can do in Logstash. And then we have Elasticsearch
as the data store. And we have Kibana
as the iron top and we’ve already seen that one.>>What about X-Pack?
What does that do?>>X-Pack is
the commercial extension that we have on top
of the open source. So unlike some other
open-source companies, what we do is we don’t have an enterprise edition of all of our open-source
components. We just have the
open-source components, and we just continue
to develop them and release
new versions of them. What we have
commercially, instead, we have commercial
extensions on top, so basically as
add-ons or plugins. The nice there is
you can just keep your open-source installation
as you set it up, and then if at some point
you see that you can get value out of the commercial
extension that we have, the things like security, and alerting and monitoring and reporting and in
machine learning and these kinds of
functionalities, then you can
install them on top, drop them right in
and you’re good to go.>>That’s awesome.
The other question I have is, with the volume of
data that’s coming in, is it real-time search or do you have to shoot off
a job and then things happen? How real time can this be?>>Yeah. It’s fairly real time. One of the reasons why I think this stack is so popular for especially log data and metrics is that you do need real-time access
to the log data, the metrics if
something is going haywire anywhere in
your IT landscape, or any of your applications
on your web servers, then you want to
be able to debug now and not wait for
some job to finish.>>Right.>>So even if you have
a ton of data in here and we have deployments of hundreds
of terabytes of data, even sometimes
petabytes of data, you can still expect
this real-time response time. You just have to scale up your data store Elasticsearch
methods here. So instead of just having it on your laptop on one instance, you would have dozens, maybe even hundreds
of instances, but then it’s
perfectly possible, it’s really designed to scale. That’s awesome. So you mentioned
that it works on Azure. How does one set that up? Because I mean this is a super interesting way of
looking at logs.>>Yeah, definitely.
So what we have is, and I’m going to
quickly show this here, is, I’m going to switch
back to my browser. So, aI have our Azure portal
here that we have, and this is something
that I would assume that every user
uses every day. And so to deploy Elastic, what you can do is
we have a template, we developed a template
together with Microsoft where it’s a deployment
template so it deploys a bunch of VMs
and then surrounding infrastructure to and
configures them with Elastic, deploys Elastic on them, Elastic Search and
Kibana UI, configures them and then you’re good to go.>>So it’s literally just
click template and it’s there.>>It’s basically like
that. If just click on New, and then you can search
the marketplace here. So let me switch for Elastic. If I switch to elastic, the first thing that
should show up is, there we go, the official
Elastic Stack template. So you just click on that, gives you a bit of
description here and then. At the bottom, you can click on Create and then it
takes you through this eight-step process
of configuring everything. I think I have it opened here in another tab where I’ve
already filled some things out, so you just configure things
like the username there and the password that you
want to use to be able to access the VMs that are
going to be deployed for you. You can configure how many
how notes you want to have. You can configure which
node size you want to use depending on how much data
you expect to put in there, and then a bunch
of other things, whether you want
to deploy the UI. And at the end, it gives
you this nice summary that summarizes
everything that you’ve configured and then you
just click on OK and you, after this, validated
this thing. There we go. And you just click on
OK now and if I do that, then it would now go and it would create all the machines, then create all of
the application gateways and everything that is necessary
to set this whole thing up and then essentially you just have a URL and IP address
that you can go to. And then you would see the UI that I was just showing
you. You would see Kibana. And then there’s no data
in it at the beginning, but then once you get the data
in, you’re good to go.>>That’s really cool. The other thing you mentioned, and it’s because I’m
a machine learning guy, you said something
about machine learning. Could you talk a little
bit more about that?>>Of course. So we have this this functionality that we
call Machine Learning now. What it does at the moment
is it does anomaly detection. When you have
log data or metrics, then all of this is
time series data. Everything has a time stamp
associated with it, and so there’s probably
some kind of pattern to it. Probably, your data is
not different every day, but you have just about
the same number of visitors on your website today that you had yesterday than
you had last week. So then, that means that you can do some very nice anomaly
detection on there. And I can actually show it.>>Let’s do it. I want to
see some machine robots here.>>There we go. So, I have the machine learning
tab here and we have a few jobs
already configured, but I’m going to
create a new job here. So, when I create a new job, I have a couple of options
for how to create it. Let’s keep it very
simple for now. Select my data set again. And then what I can configure here is it first
ask me what I want to look at. So this is unsupervised
machine running, so I don’t have to
train it or anything.>>Right.>>But I just point it towards the metric
that it should track. The easiest thing, what I always start with is just the count, just look at how many documents, how many events do we have here. And so if we do that, then
I should be seeing here, this is just an aggregation
of the raw data, what we’ve seen before already, just kind of like the
normal up and down. And now what we’re looking
for here is we’re looking for anomalies in this trend, in this time series. And so what it can do
here is I will give it some kind of a name. And then I will hit create job. And as I hit create job, what it will start doing, it will start looking
at the data as it has it and it starts creating
a model based on what it sees, based on what it observes. And so it goes through and very quickly through all of
the data that we have here. Remember, it’s about
six weeks worth of data, so 15 million events
something like that, and it starts
building this model and it was actually pretty fast. It’s finished, it
found least one anomaly so let’s look at the result pure.I mean actually go to
the very beginning first. So here we can see
the learning phase. At the beginning, when
it had just started, it just started
building the model, it doesn’t really know what
the pattern of the data, so it doesn’t know is
there daily period, is there weekly period? What are the normal
ups and downs? What are the normal values
for this time series? So it just kind of is very
naive at the beginning, it just assumes an
upper and lower bound, and this kind of seems
to work for some time, and then after a bit of time, it starts getting smarter, we
can see how it got smarter, how it recognized
that, apparently, there actually is
a daily period in here. Probably since it’s a website, we’re going to have more traffic
during the day time and less traffic during
the nighttime just based on human psychology, or just having to sleep
at night unfortunately. And so, the model
becomes more precise. And then at some point,
it found an anomaly here where we usually have, this seems like daytime, the number of
users increases and then suddenly it drops on hard. So we can see here at the bottom is the actual anomaly that it has found,
details about it. So we found anomaly on February
27 and we were expecting about 4600 requests
during this time period. I think it’s about
a 15-minute time period here, and this would have been normal,this would
have been expected. But we only actually
have 281, so something>>Something broke.>>Something went wrong, it’s 16 times slower. And in fact, some people
might remember this. February 27 was the day when back when
AWS 3 had an outage, like the whole thing
went down and our website was actually tied to this in some weird way in the back end. And so, as many other people
as well, we had a problem.>>So that’s really cool because it takes you from
getting the data in, from actually
visualizing and then to actually getting
some intelligence over that data in
order to act properly. That’s awesome. It
turns out we also have a customer
here that uses it. Anshul Kumar, won’t you
come over here, bud, and have a seat and tell us a little about
what your company does and as well as how
you use the Elastic Stack.>>Thanks for having me here, so i work for
McKesson corporation. Next line or touch bar.>>Nice.>>That’s true. And
I’m guessing it’s not many people know
the name but it’s the biggest
healthcare company in the United States and
more than 185 years old. And I work for the medical-surgical
business unit and we have a quick map there. Those are our
distribution centers that belong to my business unit. And our customers,
our physician offices, surgery centers, we are
pretty much the middlemen. The next time you see
a McKesson van driving around, that’s probably from one of our distribution centers and that’s our
own private place.>>So, Elastic stack. I was talking to Christoph
earlier and he’s like, what can you tell us about
Elastic stack architecture? And I had around five
minutes to do it. So assuming engineers are smart people I threw
together this word Cloud. Hopefully, I’ll try to
speak to it and people can hopefully search on it
and get more details. Well, the first thing was
bring your own license. That we have on
the Azure ecosystem, you have pass
service Azure search. We did not do
Azure search because our used cases mostly
about aggregate functions, and Azure search does not
give aggregate functions. So that’s where we had to
build it as an IAS Service. And as soon as you
And as soon as you get into an IaaS model, then, if you recall from a couple of minutes back, the [inaudible] template that Christoph showed us, that was our starting point. We deployed the first set of Elastic clusters through that template. The next thing we found out is that building Elastic on-prem is very different from building the Elastic Stack on VNets. Because of Azure security, as the development group or the infrastructure team we don't have access to the private VNet in the cloud; that's managed by a separate team. So it's imperative that we know our subnets, and people who are on Azure know a subnet is an IP range. So if you think your Elastic Stack cluster might grow from a hundred nodes to a thousand nodes, make sure you plan that subnet and ask the team to build it with that kind of IP space.>>I see.
>>And speaking of IPs, another thing is that the security is pretty rigid, so you need to know your ports. On the Elastic Stack, 9200 to 9300 is the basic requirement. But we got burned on that: that's what we assumed, and that's what we built. Later on, we found out that the app team uses port 9500. So it's good to start a conversation with your own internal consumers about which ports they're going to hit the cluster with; know it in advance rather than finding out later.
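As a small illustration of "know your ports in advance," a quick reachability check like the sketch below, run from a client subnet, can confirm which ports have actually been opened before an app team discovers a blocked one in production. The host and the port list are hypothetical.

```python
import socket

# Hypothetical cluster node; 9200/9300 are Elasticsearch's usual
# HTTP and transport ports, and 9500 stands in for an extra port
# an internal consumer might need.
HOST = "10.0.1.4"
PORTS = [9200, 9300, 9500]

for port in PORTS:
    try:
        # Attempt a plain TCP connection with a short timeout.
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} reachable")
    except OSError as exc:
        print(f"{HOST}:{port} BLOCKED ({exc})")
```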
And one other thing that came up: being in healthcare, security is of the utmost importance.>>Of course.>>Traditionally, our data centers are in complete lockdown, so no one gets in and no one gets out. But on the public cloud, we had to make sure we're only selectively opening up ports, and even within the Azure space we have multiple VNets. Why is one VNet talking to another one? So on that side we had to do plenty of research. We're a big Hadoop shop, so we push data into Elastic through Hadoop.>>I see.>>The ES-Hadoop connector. And we specifically opened only those IP addresses. Another thing, which I can pretty much guarantee: as soon as you move from on-prem to Azure and you try to run Hadoop jobs, it's going to fail. And that's the property highlighted in red.>>Of course.>>Make sure you add es.nodes.wan.only. What it does is tell the connector to send the Elastic Stack traffic, the Hadoop traffic, only to the nodes you declared, so it's not trying to discover and reach every Elastic node across Azure.
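For reference, here is a minimal sketch of what that setting looks like in practice, using the ES-Hadoop connector from PySpark, one common way to run it. The source path, index name, and node address are hypothetical, and the elasticsearch-hadoop jar must be on the classpath.

```python
from pyspark.sql import SparkSession

# Assumes the elasticsearch-hadoop connector jar is available,
# e.g. via: spark-submit --jars elasticsearch-hadoop-<version>.jar
spark = SparkSession.builder.appName("es-load").getOrCreate()

df = spark.read.json("hdfs:///data/weblogs/")  # hypothetical source

(df.write
   .format("org.elasticsearch.spark.sql")
   .option("es.nodes", "10.0.1.4:9200")   # declared ES nodes (hypothetical IP)
   # The key setting from the talk: in a cloud/VNet setup, talk ONLY
   # to the declared nodes instead of discovering the whole cluster.
   .option("es.nodes.wan.only", "true")
   .mode("append")
   .save("weblogs/doc"))                   # target index/type (hypothetical)
```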
The other problem we found was with the existing template. Normally on Azure you can scale up and scale down, and when I say scaling, it's both vertical and horizontal. Vertical is going to be a bigger VM.>>Right.>>Horizontal is going to be more VMs. And we don't have that functionality in the current template. At the same time, if you are live, you're running a production cluster: people have tied their jobs to the load balancer, to the static IPs. You cannot flip it on the fly, so it's imperative that you keep the same static IP. The only way we found to do this was to selectively take down one of the machines and use Azure Active Directory and the Azure portal to insert one machine at a time, so that it keeps the same host name and the same IP address.>>I see.
>>And one other problem around security is encryption at rest. Encrypting these machines takes time; these are bigger machines. Don't be surprised if it takes one full day, 24 hours, just to get the OS disk encrypted. The data disks might run for three days. So build in that buffer as you insert more machines, and set those expectations with your managed cloud provider. Because we're a bigger enterprise, we outsource the management of our IaaS assets on Azure, and we went with Rackspace. Early on, we thought encryption is encryption, so we ended up doing BEK. It turns out that's not supported: the Rackspace backups were failing because they only support KEK. So that's another expectation to set with your cloud providers. Like I said, each encryption takes up to three days; better to know that in advance than find out the way we did.>>Right.
>>And we do use Azure Storage, blob storage; that's for disaster recovery and geo-replication. So in case of DR, we can flip all the data over and build a brand-new cluster on the fly. And we did a drill: it takes around 16 minutes for us to go live on that.
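A rough sketch of the plumbing behind that kind of DR setup: Elasticsearch snapshots can be written to Azure blob storage via the repository-azure plugin, then restored into a fresh cluster. The repository, container, and snapshot names below are hypothetical, and the plugin must be installed and configured with the storage account credentials.

```python
import requests

ES = "http://localhost:9200"  # hypothetical cluster address

# Register a snapshot repository backed by an Azure blob container
# (requires the repository-azure plugin and a configured storage account).
requests.put(f"{ES}/_snapshot/azure_dr", json={
    "type": "azure",
    "settings": {"container": "es-snapshots", "base_path": "prod"},
}).raise_for_status()

# Take a snapshot of everything; geo-replicated blob storage then
# carries it to the secondary region.
requests.put(
    f"{ES}/_snapshot/azure_dr/nightly-1",
    params={"wait_for_completion": "true"},
).raise_for_status()

# On the DR cluster, the same repository would be registered and the
# snapshot restored into the brand-new cluster, e.g.:
# requests.post(f"{DR_ES}/_snapshot/azure_dr/nightly-1/_restore")
```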
>>Wow. So you actually ran through the failure and the failover, and it took 16 minutes?>>We tried to mimic it, yeah.>>That's amazing. I mean, this little five-minute thing that you did is just: look, here are some tips, because we've used this, we've used the Elastic Stack goodness, and these are the tips and tricks you should watch out for. These are amazing. Any more you have in closing?>>I want to do some fun stuff. The next one that I had was->>Let's do it.
>>So, traditionally I come from a data warehousing background, and the traditional approach to doing research or analytics is to first get the data loaded into a relational database somehow, and then kick off the joins and write all that stuff to make sense of how the data connects together. And this became a challenge. It's usually fine with small amounts of data, but for projects like the customer 360, where the data has never been looked at by anyone, where it's the very first time someone's looking at it, I started running into issues: unstructured data support doesn't exist, and there's the field limitation, where a column can only take a thousand characters, and so on. Then it occurred to me: how about I throw everything into Elasticsearch indices using Hadoop? Just separate indices, super simple; load the data as it is, do not manipulate it. And towards the end, just drag and drop each one of the indices into one dashboard. So that's what we did.
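Here is a minimal sketch of that load-it-as-is approach, using the Python Elasticsearch client's bulk helper rather than Hadoop, for brevity. The index names and source files are hypothetical.

```python
import csv
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")  # hypothetical address

def rows(path, index):
    """Yield each source record unchanged as a document for `index`."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # No cleanup, no joins, no schema design up front:
            # every field lands in Elasticsearch exactly as-is.
            yield {"_index": index, "_source": row}

# One raw dump per source system, each in its own index.
bulk(es, rows("ivr_calls.csv", "ivr"))
bulk(es, rows("sfdc_cases.csv", "salesforce"))
bulk(es, rows("nps_responses.csv", "nps"))
```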
In the screenshot, most of it is masked but
if you look closer into it, the first one is coming from our IVR system that’s our
call center support calls. And there’s an NLP behind the scene. I won’t go into that. The next one is from
our salesforce.com engine and we’re also using the Net Promoter Score here in our e-commerce website. So essentially, with
minimal development effort, I think six hours of coding
that’s what is required. I only had a BI visual dashboard which I was able to send
to my product owner. And he says, “Okay, how do I search it?” I’m like, “If you can type in the text box,
you can search it.” So, there are no joins
needed, nothing. Well, this was a good
slingshot approach we got.>>And this is impressing
because it’s coming from multiple data sources that are not traditionally
in the same place.>>And the best thing
was traditionally, IT builds a product, rolls it out, then
business comes back to find out how can
you validate it. It’s doing the right thing.>>Right.>>So, there’s a second
portion to the screen where we are exposing
the complete indices. If business had a question, they can look at it because
we do not throw anything out. So Elastic calls it, what I’m going to call records. Elastic call it document. The whole document is available. They can open it, click it, the entire e-mail. And
that’s why it showing up.>>That’s amazing. So, a question coming
in from the audience. Are there any
limitations on joins, any guidance on making
search queries not expensive from
a compute perspective?>>Yes, so joins is
>>Yes, joins are an interesting topic. The question comes up a lot: does Elastic do joins? And as an architect I know I should never say no, but it's kind of no. The reason we don't do things like arbitrary joins, the way something like a SQL database does, is that, as I said, we can run on hundreds of nodes, and doing arbitrary joins across those hundreds of nodes just doesn't scale. So there are different things you can do. One, we can do a kind of pre-join. What you can do is, when you put two or more documents, some set of documents, into Elasticsearch, you can specify a relationship there, like a parent-child relationship, and then we make sure that we store those documents all on the same machine. Then we can actually do in-memory joins, so that works. So that's specifying it up front.
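A rough sketch of what that parent-child pre-join looks like, using the join field introduced in Elasticsearch 6.0 (earlier 5.x versions used a separate _parent mapping instead); the index name and relation names are hypothetical.

```python
import requests

ES = "http://localhost:9200"  # hypothetical cluster address

# Mapping: one join field declaring a question -> answer relation.
requests.put(f"{ES}/qa", json={
    "mappings": {"_doc": {"properties": {
        "relation": {"type": "join",
                     "relations": {"question": "answer"}}}}},
}).raise_for_status()

# Parent document.
requests.put(f"{ES}/qa/_doc/1", json={
    "text": "Does Elastic do joins?",
    "relation": "question",
}).raise_for_status()

# Child document: routing=1 forces it onto the parent's shard,
# which is what makes the in-memory join possible later.
requests.put(f"{ES}/qa/_doc/2", params={"routing": "1"}, json={
    "text": "No, but yes.",
    "relation": {"name": "answer", "parent": "1"},
}).raise_for_status()

# Query parents by matching their children (has_child).
hits = requests.get(f"{ES}/qa/_search", json={
    "query": {"has_child": {"type": "answer",
                            "query": {"match": {"text": "yes"}}}},
}).json()
```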
The alternative, which is actually what many people are doing, is that you can often just denormalize the data before you get it in.>>Right.>>For something like a log message, like the weblogs we've seen here, a given piece of data is probably going to appear not just once across all your weblogs but several times. But we just store it multiple times; we're a distributed system, we can store data, and we can store a lot of data, and then of course search stays really, really fast.
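As a sketch of that denormalization idea: instead of joining a weblog event against a users table at query time, the user fields are copied into every event at ingest. The field names here are hypothetical.

```python
# Hypothetical lookup table, e.g. loaded from a users database.
users = {
    "u42": {"user_name": "ashley", "user_plan": "pro"},
}

def denormalize(event):
    """Copy the user's fields into the event before indexing it.

    The user data is duplicated across every event for that user,
    trading storage (cheap in a distributed system) for join-free,
    fast searches later.
    """
    enriched = dict(event)
    enriched.update(users.get(event["user_id"], {}))
    return enriched

event = {"user_id": "u42", "path": "/checkout", "status": 500}
doc = denormalize(event)
# doc now carries user_name and user_plan alongside the log fields,
# ready to be indexed as a single self-contained document.
```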
of dealing with this but people
usually find a way.>>So, no but yes. No but
yes, you can do joins.>>No but yes, exactly.>>No but yes, yeah. To do
a little processing time. Well, anything else or where could people
go to find out more? Anything else you want
to say about this?>>Sure. I think the easiest way is just to go to our website, elastic.co, and we have a ton of content there. We have live documentation for all of these tools; we run at least one webinar a week, oftentimes more; and we have a very, very active blog which describes all of the features. Everything that I've shown here is something you can look up there and dig into. And if you ever feel that you would like some additional help, just let us know, and we're happy to help.
>>Awesome. Well, thanks so much for spending some time with us, Christoph and Anshul. This has been amazing; this has been a really good Azure OpenDev. This is, unfortunately, the last session. Just a couple of things to call out: all of the sessions will be recorded and available on demand, along with getting-started tutorials, at [email protected]/opendev, so make sure you check those out. Continue the conversation with our speakers on Twitter using the hashtag #AzureOpenDev. And finally, if you're in Seattle, join us for the Azure OpenDev after party at Hard R-, I can't even read, man, the Hard Rock Cafe at 6 p.m.; go to aka.ms/opendevafterparty to register. It's been a joy being with all of you. Thanks so much for watching, and hopefully we'll see you at the next Microsoft Azure OpenDev. See you.
