Frequently Asked Questions (FAQ)

DoD Enterprise DevSecOps Initiative

How many DoD programs are currently using Platform One?
The number is continuously evolving, but we have about 42 pathfinders across DoD working with us in various ways. We also know our hardened containers are used across the U.S. Government and even by commercial organizations.

What percentage of applications are written in Java, .NET or another language?
It is impossible for us to tell, but our new greenfield applications will use modern programming languages whenever possible, including Java, Python, Go, etc.

DoD Centralized Artifacts Repository – DCAR

How broad is the internal adoption of vendor products from the DCAR?
Very wide, as it is a very streamlined process to get software accredited DoD-wide. We are actively working with 40+ companies/products at this time.

What is the communication policy to alert vendors of changes made to the SCSS, DCAR or Platform One programs?
We usually post information within a week on the software.af.mil website, and we do bi-weekly Ask Me Anything sessions where we share our updates.

Is there an SLA for vendors to respond to changes?
Ideally within 30 days after a change is announced. That being said, it is critical that vendors automate the push of their software container updates and dependencies in real time, with as little delay as possible.

Can we get access to a pre-production environment (including the deployment pipeline with security scanning gates) in order to test and validate our application?
Not at this time; you must set up your own environment using the same scanner we are using so you can proactively let us know of a new CVE.

To what extent does a vendor application have control over the auto-scaling capabilities of the underlying Kubernetes platform?
You provide the Helm charts, Kubernetes Operators, or Kubernetes manifests with your application, so you can certainly push for the right configuration settings there, but they can also be customized by each DoD program.
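For illustration, a minimal sketch of what that looks like in practice (the names and thresholds below are hypothetical defaults, not a Platform One requirement): a vendor can ship a HorizontalPodAutoscaler alongside the application so the suggested scaling behavior travels with the manifests, while a DoD program can still override the values.

```yaml
# Hypothetical vendor-suggested autoscaling defaults; a DoD program
# can override these (e.g., through Helm chart values).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vendor-app            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vendor-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```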

When we release a new version of our application to DCAR, how do the deployed instances get updated?
Pathfinders automatically download DCAR container updates (assuming they are connected), up to twice a day. Each DoD program can then decide whether or not to use the “latest” tag in their deployments.
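As an illustrative sketch (the registry path and version are hypothetical), that choice comes down to a single line in the Deployment's pod spec:

```yaml
# Hypothetical image references; actual DCAR registry paths differ.
containers:
  - name: vendor-app
    # Option A: track whatever DCAR last published (automatic updates):
    #   image: registry.example.mil/dcar/vendor-app:latest
    # Option B: pin a specific scanned version (controlled updates):
    image: registry.example.mil/dcar/vendor-app:2.4.1
    imagePullPolicy: IfNotPresent
```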

How frequently can vendors update their containers within DCAR?
As much as you want. We certainly do NOT want to be behind your commercial releases.

We were told that all dependencies must run in a container, thus do we need to provide additional containers when they do not already exist in DCAR?
That is correct: if a container doesn’t exist in DCAR, you must bring it with you, and the same goes for ANY dependency. You must assume the container build will happen offline. If the container is part of the list of 170+ containers we will be supporting, reach out to our team to ask if it is already being done; if not, you can provide it to the team and they will take over the ongoing maintenance of the container.

If a container needs storage, how should we create it?
As long as you use Kubernetes-native APIs to provision the storage in YAML, it should work for DoD. You can certainly use PersistentVolumeClaims (PVCs), etc.
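A minimal sketch of such a Kubernetes-native storage request (names and sizes are illustrative):

```yaml
# PersistentVolumeClaim: asks the cluster for storage without naming
# the underlying storage technology, which keeps the manifest portable
# across DoD environments. Any storage class is environment-specific.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vendor-app-data       # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
```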

Is there a list or can you speak to how the containers in DCAR address continuous ATO monitoring and maybe how the sidecar security containers work to serve cATO as well?
Please have a look at our AMA from Dec 13 at:

How do you advise vendors protect Intellectual Property (IP) and control access to software products while being hosted in DCAR?
We do not ask for the source code of the application inside the containers, so if that IP is not open source, you do not have to give us the source code; you can just give us the binaries. You do have to give us the source code for the container itself, the Dockerfile, so we can rebuild it. That should protect the IP, assuming you are protecting your binaries; you will have to have some sort of key management so no one can use the container without a license. As long as you give us the binaries and the source code for the Dockerfile, we rebuild it, we scan it, we sign it, and then you are good to go. I don’t see how that could put the IP at risk.

Cloud One / Platform One

What does IL5/IL6 mean?
IL2 covers public information; IL4 covers Controlled Unclassified Information (CUI), including PII and PHI; IL5 covers national security systems and more sensitive CUI, e.g., source code for a weapons system or anything touching national security; IL6 covers Secret data.

Are there any plans to centralize funding and provisioning of the DSOP tools or licenses?
We have a little bit of centralized funding now to make these things happen. Thanks to the basic ordering agreement, we will know more and more about what we buy, which will potentially help us next year to buy in bulk and provide this at a broader scale. It’s not going to be a 2020 thing; it’s going to be 2021, unfortunately. But yes, we do have a plan to start providing broader access to licenses and to buy in bulk. We will also be able to bring a managed service offering on Platform One by June 2020, including chat. People will be able to use that; they still have to pay for the licenses, but it’s going to be way cheaper because we are going to buy in bulk. It’s a pay-per-use model.

Will use of the DevSecOps Access and/or Internet Facing Cloud VPN be acceptable for cloud applications that are accessed outside of AF/DoD?
The DevSecOps access point is internet-facing and designed to be accessed from anywhere.

What is the difference between Platform One and LevelUp?
Level Up is Platform One; it’s the same thing. We merged the Space Camp team with the Level Up team and created Platform One. We are not building a custom Government cloud. This is all coming from commercial companies, with Azure Government and AWS GovCloud.

How do you see interfaces within Cloud One and going out of Cloud One evolving? Are APIs the way to go?
Yes, for sure. APIs and abstraction are critical. We certainly don’t want to have issues when it comes to connectivity, or get locked into a provider or product. Anything you build should have a layer of abstraction. You always need to keep that in mind when you select products or APIs: is that API schema open source, or is it locked in? We will always want to have that layer of abstraction and use an API.

On the idea of Hybrid Cloud solutions, do you see future programs being able to leverage tools/services from multiple GovCloud providers (i.e., Azure AND AWS) rather than just choosing one? Are there tools/services in Platform One already to help achieve this?
Yes, we actually do have that today. When you have an identity on Cloud One, identities are centralized, so we are decoupled from cloud-specific roles; we can have access to both, and our portal and single sign-on can connect to both Azure Government and AWS GovCloud. That is how we do single sign-on identities across both clouds on Cloud One today, and it can be done at the application layer. This is very important, and it’s why Kubernetes brings a ton of value with multi-cluster: we can actually do that across environments with no problem.

Does Platform One manage tenancy with Azure & AWS internally?
Yes, we do. Based on your need, we can have different models. We have IaC/CaC for different environments and Kubernetes distributions. Right now most are using AWS GovCloud, but we are also working on Azure, vSphere, and other options. For our multi-tenant environment, we have templates for our CI/CD pipelines. If a change needs to be made, our DevSecOps team creates new IaC or CaC. If we have done it for any other team previously, it is “push button”.

Are Platform One/Level Up cost estimates available?
Costs are tough because it’s not one size fits all. Cost is based on your compute, your storage, and the environment classification: are you going to be in a classified environment or not, etc. It’s not a price list. The team will work with you to define your requirements and give you an estimate of the cost. It also depends on the number of users, the licenses you are going to need, and how many people you need to support your ops and your day-two ongoing management of your Kubernetes cluster. It’s a very flexible, a-la-carte model where you get to pick what you want to use. Cost can go from free, if you run it yourself, to a few million to run it in a classified environment. You will certainly end up saving money; there’s no way anyone could do it cheaper, because this is a Government team and we aren’t here to make money. We benefit from the scale and from access to things like the DevSecOps access point, which no one else has access to and which makes it unique. Cost is very fair and very streamlined for you to succeed.

Are the developers provided by Platform One limited to Platform One environment work or can they work on actual software development ?
Platform One does not provide development teams; it provides a contract for you to get access to developers. We only help you build the platform. We are here to help the existing team get the job done and to train them. We can help bring in more people if need be, but they are not part of the Platform One team; they are contracted through Platform One DevSecOps contracts.

All programs do queries and reports. Do you foresee a standard ad hoc query tool and/or data analytics tools as services in cloud one?
Yes, we are building quite a few options when it comes to simple data layers, and we are going to bring the basics. I want to bring Pub/Sub so you can listen to events, say to generate a report or take action on an event and save it somewhere, and bring a data layer on top of that with different options, probably 5 or 6. That’s not until 2021; right now each program will have to build their own stuff due to priorities and budget.

Continuous Authority to Operate

How long does it take to get access to an environment on Cloud One or Platform One with c-ATO?
It depends on the complexity of the environment and whether the containers for the tools you need already exist, but if you’re not highly opinionated on tools, we can usually have an environment up within a couple of weeks.

Since we can benefit from the c-ATO from Platform One, will we still need our own ATO on top of this environment?
It depends: if you deploy containers and use Kubernetes in production, you will benefit from full reciprocity to run your application. If the application is not containerized and runs on legacy environments, you will reuse the ATO of that environment, or you might need a dedicated ATO for it. We will work with you and your Authorizing Official to define the best path forward.

What kind of documentation is being used to guide the cATO efforts?
We are piggybacking on the existing understanding of what we have done, but there is still new, exciting content coming to you within a couple of months.

For development purposes, are there specific versions of Linux that would be considered “more secure” and more likely to pass an ATO/RMF process? For instance, Alpine Linux?
We use UBI, the Universal Base Image, which is a RHEL-based image; we have both RHEL 7 and 8. UBI does not require a license, so when you use it you benefit from not having to pay for a license while still getting the supply-chain trust of RHEL. We do not recommend using Alpine.

What documentation (PO&AM, CMP, Config Mgmt, etc.) is required to achieve a CTF for a containerized software product on Platform One?
We have a well-established continuous hardening guide, which I do believe needs a bit of an update; that’s what we are going to do next month, where we show what we want to see for the CTF. Right now it has the findings, the mitigations of the findings, and how we went through scanning the container. That’s what we do on DCAR: we bring the scanning results and we sign them.

DevSecOps DoD Questions

Why is the DevSecOps initiative pushing teams to use containers and microservices?
A modular architecture allows for decoupled teams and flexible/elastic use of technologies. Refer to our training and slides on containers and Kubernetes.

Can containers and microservices be used for weapon systems or real time systems?
Yes. We can patch Kubernetes and the Linux kernel to run as a real-time system. Microservices can be used for any system, including weapons. In fact, this was used in our demonstration of putting Kubernetes on F-16 jets, on legacy hardware!

Where can I find more info on the F-16 implementation?
Send Rob an email at rob.slaughter@afwerx.af.mil.
There is a group called SoniKube at Hill Air Force Base working on this.

Are there discussions with USAF/TE about how to handle T&E in a DevSecOps world?
Yes. We are talking to multiple teams across the testing communities, from the Air Force level to OSD to the nuclear safety organization. So, really every aspect of the testing community; we want to shift them to the left and bring their people into the teams. Go into integration testing and end-to-end testing and really automate that, starting with the F-16, to be able to make a code change and push it to the jet with complete, streamlined automation.

What is the latest status of the acquisition changes that will enable faster transitions to Agile and DevOps?
Check out the latest publication from A&S that is pushing this new software track.

Are there any community collaboration channels such as a slack channel where DoD and DIB community members can collaborate?
We want to be able to bring the chat with us; we don’t want to be locked into a SaaS. We don’t use Slack because we want to have the same chat experience across all classification levels; we use Mattermost. If you are part of Platform One and you are working on a program, you can get access to the Mattermost chat, for both contractors and government employees. Anyone can access it, but you have to be part of a Platform One pathfinder. It’s not accessible to everybody yet; that might happen pretty quickly, in June or July, but right now you have to be part of the Platform One pathfinders.

Has the Air Force considered security as a service as they have considered EITaaS?
As far as the DevSecOps stack, we are creating pretty much what you would call a cyber stack as a service. We are going to pick different options, but the list is not final; we are looking at different products. That will be a managed service through us that people can consume. We can centrally scan open source tools so the same things don’t have to be scanned 50 million times.

When software that could be developed in your environment is deployed on a drone, plane or in a phone as an app, how are these apps hardened? What are you doing to prevent reverse engineering by an adversary? ie. an iPhone can be lost and a drone can be shot down behind enemy lines. These are all unprotected endpoints. Lastly, when you need encryption, what type are you using? Do you see the need for more?
Obviously each piece of software can use different kinds of tools. We use various options now on containers. OCI compliance just released a preferred way to encrypt containers; that could be one option. But again, if the key is on the system, can bad actors still get in? Of course, based on the hardware, we have encryption at rest on the hardware as well; there are always layers of security. If you have ideas on encryption, we do want to stick to OCI-compliant containers, so we don’t want to create a custom concept of encryption. As of about a month ago, the OCI released a new way to encrypt containers, with the signing aspect as well.

Do we have training and resources available for embedded, in particular a small microcontroller with under 1MB of Flash, and either bare metal or a very lightweight real-time OS?
We don’t have specific training yet. We are paying particular attention to that kind of problem and will be working on bringing new content.

How will this work with weapon system OFPs and multiple contractors/government software teams working together?
It’s not a one-size-fits-all answer. The long-term vision is to be able to bring you a government-furnished DevSecOps environment that we provide to all the contractors as a central place to work. It will no longer be each vendor using their own environment to build and test software and throw it over the fence during integration; that is not where we want to be. For us it is all about bringing continuous ATO to multiple environments and multiple classification levels, so that the day contractors get the award they can go to work and start writing code, and not have to spend a year building an environment. You should not be throwing it over the fence; it should be centralized per program and sometimes, ideally, across programs, but we are going to be flexible. There are going to be multiple different use cases, and we are going to let the teams pick the tools. But the key is that a team uses the same DevSecOps platform and pipeline, so there aren’t 20 different teams using different tools and different integration, ending in disaster because of drift between environments. It doesn’t matter what software they are building; it’s all the same.

Will there ever be a pathway for squadron-level development? For example, putting dev tools on NIPR?
We aren’t going to put them on NIPR; we will make them accessible from NIPR to the Cloud. We are going to host them on Cloud One, and you will have access. We will ideally fix the problems at the base so you can access the Cloud. Many bases are deploying, with EITaaS, faster and more modern connectivity to the internet, so you can connect to the Cloud in a normal fashion, like at home, which will be great. If you are on a specific base, let us know; we have dates for when that is going to happen, and there’s a whole plan for it. Again, any team can access Cloud One to do development there at any time.

Will you help the AF network stop basing security on IPs and change to domain-based security? Many of our interface partners now in C1 don’t want to give us fixed IPs for our AF PPSs.
Yes. If you have a case right now where you are having this issue, we are fixing that. We are moving to a zero trust mindset, so it is not going to be driven by IPs, that’s for sure.

A great deal of Air Force programs are Mission Critical with very tight performance requirements and very strict change management. How do you see DevSecOps fitting into, say a Ballistic Missile System program?
There are often a lot of predefined requirements that don’t enable the teams to adopt DevSecOps. That’s where we want to go back and help the teams understand the value of rapid prototyping, rapid delivery of capabilities, and fast learning. We are partnering with the largest weapons systems right now to move to the DevSecOps mindset; we have proven that it can be done for a weapon system. The key is to get the buy-in and the culture change. Once people understand and truly get Agile (it has nothing to do with DevSecOps; it’s the basic principle of Agile), they will understand the value there as well. If you have a program that is still on the fence and is willing to have the discussion with us, that’s what we do: we onboard new programs, educate them, and train them on moving to DevSecOps. It’s a journey; it’s not going to happen overnight. Never hesitate to push them to us to get the brief on DevSecOps and the value they would get.

For existing programs with existing hardware and infrastructure that are not ready to jump to a cloud solution, will there be some degree of reciprocity / trust when using containers from DCAR?
Yes, as long as you use a Kubernetes environment, you are compliant with the reference design, and you use the containers from DCAR. This is designed to be modular at every layer: you can accredit the layer of your infrastructure with your existing ATO, inherit all of that, add our stuff on top, and then benefit from the continuous aspect on top of that. That’s how we are going to be able to do the continuous ATO layered approach.

Have you implemented DevSecOps across mixed-clearance teams all working on a single product delivery? For example, 3 teams cleared and 1 team not cleared. If so, what impediments have you encountered given mixed clearance levels?
Yes, all of our teams are working across all classification levels. The big issue now is still the diode/CDS. We have done a pilot where we can push containers from the low side to the high side on Amazon, but it’s not fully accredited just yet, so we have to work on that. When we can automate that process, you have a streamlined DevSecOps pipeline across environments and across domains. The CDS aspect is the hardest piece, but if you move to microservices, where teams decouple the classified data or the classified microservice from the rest of the code, it’s actually pretty easy to have layers of classification on top of the other containers. So do as much as you can unclassified; always do as much as you can at the lowest classification. You can build the automation layer, then reuse it on the classified side, with no drift between environments.

Does the program have any referenceable use cases where other DoD Services’ security teams and Authorizing Officials (AOs) have fully granted reciprocity to all the components they’re consuming from the Cloud One and/or Platform One environments? If not full reciprocity, are you seeing trends in what they will generally accept and what they’re still requiring their security and accreditation teams to test and document themselves?
So far, when we get involved with AOs and their teams, we have no issues, because we are pushing next-generation cybersecurity best practices, and people get pretty excited when they see the stack we bring; I’ve not had a single issue with reciprocity. With the new guidance we are going to write, we are going to have the full DSAWG approving our architecture, so it’s really going to be a given that reciprocity is preserved. I’m not really worried about that at all. If you find a team that has issues with what we are doing, let us know; we often brief AOs.

DevSecOps Generic Questions

In conjunction with Kubernetes, can you also use ROOK and CEPH for Persistent Storage in Platform One?
Yes, you can use whatever container runtime or container storage stack you want, as long as it is compliant with the API requirements. Very flexible. The key is to automate it.

How does cyber initiatives affect this initiative? For example, having to have continuous logging, and continuous monitoring e.g. Tenable?
We have that baked into the DevSecOps platform, so you do not have to use some of the old ways of doing things. You will be compliant if you use Platform One or the code we provide for the DevSecOps platform. You still need to use ACAS and everything else for what is not running inside of Kubernetes.

Has your team investigated Open Policy Agent (OPA) with service mesh to globally enforce data access permissions?
OPA is being used on Platform One. This is how we are going to enforce Kubernetes settings and other policies on Kubernetes per the SRG.
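As a hedged sketch of what that enforcement can look like (this uses the open-source OPA Gatekeeper CRDs, and the policy itself is illustrative, not one of Platform One's actual SRG policies), a constraint can require a label on every namespace:

```yaml
# Illustrative Gatekeeper constraint; assumes the K8sRequiredLabels
# ConstraintTemplate from the Gatekeeper policy library is installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]         # every namespace must declare an owner
```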

How are you planning to handle low code development (e.g.. ServiceNow) in DevSecOps?
I actually don’t think the two are compatible. I’m not a fan of low code when it comes to building software. I think it’s a great fit for simple business use cases where you know exactly how complex you will have to be in terms of customization. My fear with low code is that you are locked into the product you pick, and very quickly you are going to hit limitations in what you can do; if you need a lot of customization, it’s going to become very complex. So I don’t recommend using low code for anything complex. I don’t think you can do DevSecOps with it, mostly because these tools do not map or tie back to the DevSecOps tools that you know: they usually do not have a CI/CD pipeline, and they do not have automated, separate tests. It’s often part of the same package coming from the company offering the low code product. I’m not a fan of it, and I certainly don’t see it as a DevSecOps use case.

Serverless using Knative has been a big topic, what use cases do you envision leveraging Serverless services for?
I think you can do a lot, if not everything, with serverless. Of course, there are use cases, particularly when it comes to real time, where it won’t apply. You can do quite a bit using events, in particular when it comes to AI, machine learning, and scaling applications across the world, where you are going to need replication and to listen to events and be able to react to those events across multiple applications.

Is there any appetite for a competition on operating systems to ensure secure, open source solutions?
I’m always eager to have options; it’s important to me to have options when picking operating systems. As long as it is compatible with Kubernetes, I would say yes. If it’s going to disrupt that, I would say that’s not going to happen right now. Know that touching operating systems in DoD is a massive undertaking.

How frequently are containers/pods recycled? Are they recycled every 4 hours? Is there a default policy that can be changed/are customers in control of this?
Yes, we do have the option, but it’s not mandated. We have the option to kill containers every 4 hours and restart them as a rolling update with no downtime. You can disable it for stateful containers. If that’s something you’re afraid of, don’t worry; we don’t have to do it. It’s just a best practice for moving target defense: the more we do it, the more resilient we can be when it comes to security.
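A minimal sketch of the zero-downtime part (names and values are illustrative): a Deployment whose rolling-update strategy never drops below the desired replica count, so periodically recycling containers does not cause an outage.

```yaml
# Illustrative rolling-update settings for periodic container recycling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vendor-app            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vendor-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0       # never go below the desired replica count
      maxSurge: 1             # start a new pod before an old one is killed
  template:
    metadata:
      labels:
        app: vendor-app
    spec:
      containers:
        - name: vendor-app
          image: registry.example.mil/dcar/vendor-app:2.4.1  # hypothetical
```

With settings like these, restarting pods one at a time (e.g., a periodic rolling restart) keeps the service available throughout.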

Service Mesh is ideal for containerized applications, what do you plan on implementing for monolithic applications?
Ideally I don’t want us to build greenfield monolithic applications anymore. New code should, of course, be decoupled. It could still end up being a single binary at some point during integration, but ideally you want to be in a microservice environment. The key is decoupling things and being able to swap them, using these Lego blocks rather than reinventing the wheel. Whether it’s a sensor or anything else, expose it as a microservice so people can reuse it, and make how you use the data for your mission a separate microservice. So, ideally, we should not be building monolithic applications for new code.

Our development and production environment are on-premises infrastructure. What’s my first step to investigate moving to Platform One/Cloud One?
First, you have to be ready to move to a DevSecOps culture; it’s not just a lift-and-shift of your stack to the cloud. We don’t have to move you to the cloud if you want to stay on premises. I do recommend being on the Cloud, but we don’t mandate anything about being on Clouds. The key is that you are willing to move to the DevSecOps mindset, that your team is willing to make that change, and of course you have to be a federal employee to initiate that change. Listen to my AMA session from Dec. 13 and the weekly AMA sessions from Platform One. If you feel it’s aligned with your needs, then assess which application you want to move, and we will help you define what the minimum viable product will be. You always want to have timelines and deadlines: you don’t want a 6-month thing, you want a 60-day thing.

Based on the excellent documentation and previous AMA slides, it seems that the Red Hat UBI is the preferred platform for vendors to host their containerized applications on. Would Ubuntu 16.04 (following the DISA STIG) have the same reciprocity across the DoD, or should we as a vendor focus efforts on UBI?
We have not received an Ubuntu STIG’d image on the repo, and I’m not funding that because we don’t have the time to fund a separate OS; we would then have to rebase all the containers to Ubuntu, and it’s too much work. If a vendor has a product on Ubuntu and they want to do the STIG and give us the containers, we’ll take it and put it on the repo. I have yet to see that done. Many end up using UBI because of the ease of automating the STIG. If you are a vendor, I would recommend using UBI; it’s free, there’s no cost impact.

You don’t consider Static Code Analysis / Dynamic Code Analysis to be DevSecOps?
No, I don’t. I think that’s basic DevOps. DevSecOps means next-gen cybersecurity concepts. I’m not saying you should not do it; it should be part of the existing DevOps stack. I am pushing continuous monitoring and the Sec piece of DevSecOps, with behavior detection and zero trust.

Is OSCAL being considered around the Continuous Security Ingredients?
Yes, OSCAL is being considered but we are still waiting for OSCAL to be a tangible technical capability.

Does the managed DevSecOps stack create the CI/CD in any cloud or in Cloud One? Can you please explain the fact that it is multitenant environment?
We have both options. We have the ability to stand up a DevSecOps environment on Cloud One and on other Clouds; it’s going to be push-button by June 2020. We are also hosting a highly opinionated version, with fewer options that we picked, that you can use to simply run your pipelines. That is what we call the managed DevSecOps environment inside of our enclave: you will not be able to swap tools, so it will be whatever we picked, but you won’t have to manage or run it. Less flexibility, but it’s already ready to go. Or you can ask Platform One to build your own enclave, with your own VPC, inside your Cloud or inside Cloud One; there you pick the tools and pay for their licenses. So there are both models: a multi-tenant model or a dedicated enclave. It’s obviously more expensive to build your own enclave.

What do you think about other Agile Frameworks, such as Scrum@Scale?
It’s a very different approach from SAFe. Look into it and take what’s best for you; don’t get stuck in rigid frameworks. If you feel like it’s solving your problem and you are able to adapt it to your needs, use it. They are usually able to do a decent job of training you on what to use and what not to use. I actually don’t mind that kind of framework, because it’s way more flexible and not a one-size-fits-all approach. It’s all about the training and who is training you. Never be OK with any one-size-fits-all framework.

Could you identify what you think will be the biggest challenge in the coming year – like what are your biggest/most important goals? And follow up, could you be specific about something(s) that excite you/leadership coming in the next year?
I think the biggest challenge is scale. I think we are doing pretty well; now we need to make this an enterprise thing. Right now it’s really based on individuals, and if we lose some of those people we will be in trouble. So we want to make this sustainable and agnostic to the people. That requires a lot of work to create the structure and the team, and to create a PE (program element) for Platform One. There’s going to be a lot of work to make this an enterprise service. The most exciting thing for me is pushing the next-gen stuff to the DoD. Yesterday we became the co-lead for DevSecOps for all of the government.

You mentioned adherence to the DevSecOps MVP as being critical for interoperability and other advantages. How will you manage the MVP and adherence to it as tools and techniques grow and change?
That’s the challenge. The pace of what we have to face here is insane: if you look at just one year of CNCF videos, you see a complete move to service mesh, a concept people didn’t know two years ago. So how do we keep up, and how do we educate people? That’s why we want to piggyback on training from the commercial side. We are partnering directly with the founders of Istio, service mesh, Kubernetes, the CNCF, and the Linux Foundation, bringing their training and their expertise inside of DoD, putting their content on the website, spreading the news and the videos. It’s going to be tough. People are going to have to realize how quickly we have to readjust and learn; it’s going to be painful for people who are used to learning new things once every ten years. That’s not going to be the case in a DevSecOps environment. The pace will be insane, to the point where it might actually become a problem for people’s health. That’s why I think we need to give our people an hour a day right now to take a step back, watch the videos and the content, get back on track with training, and make sure they learn about what they don’t know. An hour a day is what you need right now to keep up with what is going on.