
Support BuildConfigs for openshift provider #96

Closed
surajssd opened this issue Aug 10, 2016 · 16 comments

@surajssd
Member

When the provider is selected as openshift and the docker-compose file has a build directive, create a buildconfig out of it, along with the deploymentconfig and service.
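
For reference, a minimal compose file that should trigger this (the service name and context path are just placeholders):

  version: "2"
  services:
    foo:
      build: ./

The build: key is what should map to a buildconfig; the deploymentconfig and service are already generated today.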

@rtnpro
Contributor

rtnpro commented Aug 12, 2016

I want to work on this issue.

@surajssd
Member Author

@rtnpro I am thinking of handling builds in multiple ways:

  • When the user has a git repo, create a buildconfig and an imagestream using that git repo's info (see the sketch after this list).
  • But a user might also want to do a local build, i.e. have the build happen from the Dockerfile. I am working on a parallel build story that builds Docker images using imagebuilder; those image names can then be put in the deploymentconfig so it uses them instead of pulling from Docker Hub. I am planning to add a flag to the kompose CLI that does this, but it assumes the user is running a single node cluster (minikube, minishift or CDK) and has exported some variables that let us talk to the remote Docker daemon.
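
For the first option, the imagestream next to the buildconfig could be as small as this (a rough sketch, assuming the v1 API and a service named foo):

  apiVersion: v1
  kind: ImageStream
  metadata:
    name: foo

The buildconfig would push its output to a tag in this stream, and the deploymentconfig can then trigger on that tag.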

@kadel
Member

kadel commented Aug 23, 2016

But a user might also want to do a local build, i.e. have the build happen from the Dockerfile. I am working on a parallel build story that builds Docker images using imagebuilder; those image names can then be put in the deploymentconfig so it uses them instead of pulling from Docker Hub. I am planning to add a flag to the kompose CLI that does this, but it assumes the user is running a single node cluster (minikube, minishift or CDK) and has exported some variables that let us talk to the remote Docker daemon.

To be honest, I don't like the assumption that the user is running a single node cluster.
And even then, I don't think the majority of users are going to run kompose on the same machine where their single node cluster is running. For example, if I'm using minikube, I'm never going to run kompose inside the minikube VM.

@surajssd
Member Author

@kadel

  • We assume the user is running a single node cluster only when that flag is provided.
  • We can do builds on that cluster machine using imagebuilder, but only if the environment variables are exported (for example the DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH values that minikube docker-env prints). Remote builds become possible once those variables are exported.
  • The user will be running kompose on their own machine, not on the cluster machine.

@kadel
Member

kadel commented Aug 23, 2016

My issue with this is that it is quite limiting :-( It requires that you have a single node cluster and remote access to that docker daemon.

Wouldn't it be better to build the image locally and then push it to OpenShift's registry? We have done a similar thing in OpenShift2Nulecule. This would also work with multi-node clusters. OpenShift already has an internal registry, and for Kubernetes we can use its registry addon as mentioned in #97 (comment)
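
To sketch what that would mean for the generated artifacts (the registry address 172.30.1.1:5000 and the project name myproject are placeholders, and the exact flow is still open): after docker build and docker push to the internal registry, the deploymentconfig would simply point at the pushed image:

  containers:
    - name: foo
      image: 172.30.1.1:5000/myproject/foo:latest

No buildconfig would be needed on the cluster side in that case.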

@surajssd
Member Author

surajssd commented Sep 7, 2016

So going ahead, right now we will only implement what OpenShift provides by default: detect the source code's git remote and create a buildConfig with that info in the buildconfig artifact.

Further, we still have to work on doing local builds without having to rely on a remote git repo.
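
One possible shape for that (just a thought, nothing settled here) is OpenShift's binary build source, which takes its input from a local upload instead of a remote git clone:

  source:
    type: Binary
    binary: {}

The actual content would then be supplied at build time with something like oc start-build foo --from-dir=. so nothing has to be committed or pushed first.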

@kadel
Member

kadel commented Sep 7, 2016

So going ahead, right now we will only implement what OpenShift provides by default: detect the source code's git remote and create a buildConfig with that info in the buildconfig artifact.

Just to explain more deeply, here is an example of how this will work:
I have cloned the repo github.com/foo/bar on my local machine.
In this repo is a docker-compose.yml that looks like this:

version: "2"
services:
  foo:
    build: ./

When you run kompose on this file, kompose will try to detect the remote of this git repo and use it as the source for the build.
The generated BuildConfig will probably look something like this:

  apiVersion: v1
  kind: BuildConfig
  metadata:
    name: foo
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: foo:latest
    source:
      type: Git
      git:
        ref: master
        uri: https://github.com/foo/bar
      contextDir: ./
    strategy:
      type: Docker
      dockerStrategy:
        from:
          kind: ImageStreamTag
          name: foo:from
    triggers:
      - type: ConfigChange
      - type: ImageChange

This means that the build happens not from the local directory but from the remote git repository.
Only what has been committed and pushed to the remote gets built.

@dustymabe
Contributor

@sebgoa can you please assign @rtnpro to this issue?

@sebgoa sebgoa assigned procrypt and unassigned procrypt Oct 5, 2016
@sebgoa
Contributor

sebgoa commented Oct 5, 2016

@rtnpro you need to accept the invitation to the kompose contributor team

@sebgoa
Contributor

sebgoa commented Oct 5, 2016

Just a thought: is there a way to run docker in docker in a Kubernetes pod?

@dustymabe
Contributor

Just a thought: is there a way to run docker in docker in a Kubernetes pod?

It would require root access, so it depends on how you have the cluster set up. I think overall it's not going to be an option, though.
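
For completeness, this is roughly what such a pod would look like (a sketch; docker:dind is the stock Docker-in-Docker image, and privileged: true is the root-level access I mean):

  apiVersion: v1
  kind: Pod
  metadata:
    name: dind
  spec:
    containers:
      - name: dind
        image: docker:dind
        securityContext:
          privileged: true

Most clusters won't allow privileged containers for regular users, which is why I don't think it's an option.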

@kadel
Member

kadel commented Oct 19, 2016

Hi @bgrant0607, what was the reason for removing @rtnpro from this? Or was it just something that happened during the move to the incubator?

@bgrant0607
Member

@kadel It just happened during the move. Only org members can be assigned to issues and PRs. I had to invite all kompose contributors to the kubernetes-incubator org.

@kadel
Member

kadel commented Oct 20, 2016

@bgrant0607 Thank you. I thought that might be it.

@pradeepto

Can this issue be closed via #206? cc @surajssd @rtnpro

@surajssd
Member Author

surajssd commented Jan 2, 2017

Since #206 is merged, closing this in favour of the more specific issue #353.
