Yes, but when you assign a "production" image to an Ironic bare metal node, you should provide *ramdisk_id and kernel_id*. Should that *ramdisk_id and kernel_id* be the same as the deploy images' (i.e., the first set of k+r)? You didn't answer whether the two sets of k+r should be the same.
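To make the question concrete, this is what I mean by providing them (a rough sketch with the glance CLI; all the UUIDs are placeholders for my own images):

    # Upload the production kernel/ramdisk to Glance first (aki/ari),
    # then point the instance image at them via image properties:
    glance image-update $MY_IMAGE_UUID \
        --property kernel_id=$MY_KERNEL_UUID \
        --property ramdisk_id=$MY_RAMDISK_UUID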
Best Regards!
Chao Yan
--------------
My twitter: Andy Yan @yanchao727 <https://twitter.com/yanchao727>
My Weibo: http://weibo.com/herewearenow
--------------


2014-06-04 21:27 GMT+08:00 Dmitry Tantsur <[email protected]>:

> On Wed, 2014-06-04 at 21:18 +0800, 严超 wrote:
> > Thank you!
> >
> > I noticed the two sets of k+r in the TFTP configuration of Ironic.
> > Should the two sets be the same k+r?
>
> Deploy images are created for you by DevStack/whatever. If you do it by
> hand, you may use diskimage-builder. Currently they are stored in flavor
> metadata; they will be stored in node metadata later.
>
> And then you have "production" images, which are whatever you want to
> deploy; they are stored in Glance metadata for the instance image.
>
> TFTP configuration should be created automatically; I doubt you need to
> change it anyway.
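(If I understand the by-hand path correctly, building the deploy k+r with diskimage-builder and attaching them to the flavor metadata would look roughly like this; the flavor name and UUIDs are placeholders, not commands from this thread:)

    # Build a deploy kernel + ramdisk with diskimage-builder:
    ramdisk-image-create ubuntu deploy-ironic -o my-deploy-ramdisk
    # -> produces my-deploy-ramdisk.kernel and my-deploy-ramdisk.initramfs;
    #    upload both to Glance (aki / ari) and note their UUIDs.

    # Attach the deploy k+r (the "first set") to the flavor:
    nova flavor-key my-baremetal-flavor set \
        "baremetal:deploy_kernel_id"=$DEPLOY_KERNEL_UUID \
        "baremetal:deploy_ramdisk_id"=$DEPLOY_RAMDISK_UUID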
> > The first set is defined in the Ironic node definition.
> > How do we define the second set correctly?
> >
> > 2014-06-04 21:00 GMT+08:00 Dmitry Tantsur <[email protected]>:
> >
> > > On Wed, 2014-06-04 at 20:29 +0800, 严超 wrote:
> > > > Hi,
> > > >
> > > > Thank you very much for your reply! But there are still some
> > > > questions for me. Now I've come to the step where Ironic
> > > > partitions the disk, as you replied.
> > > >
> > > > Then, how does Ironic copy an image? I know the image comes from
> > > > Glance. But how does it know the image is really available when
> > > > rebooting?
> > >
> > > I don't quite understand your question; what do you mean by
> > > "available"? Anyway, before deploying, Ironic downloads the image
> > > from Glance, caches it, and just copies it to a mounted iSCSI
> > > partition (using dd or so).
> > >
> > > > And what are the differences between the final kernel (ramdisk)
> > > > and the original kernel (ramdisk)?
> > >
> > > We have 2 sets of kernel+ramdisk:
> > > 1. Deploy k+r: used only for the deploy process itself, to provide
> > > the iSCSI volume and call back to Ironic. There's an ongoing effort
> > > to create a smarter ramdisk, called Ironic Python Agent, but it's
> > > WIP.
> > > 2. Your k+r as stated in the Glance metadata for an image - they
> > > will be used for booting after deployment.
> > >
> > > > 2014-06-04 19:36 GMT+08:00 Dmitry Tantsur <[email protected]>:
> > > >
> > > > > Hi!
> > > > >
> > > > > The workflow is not entirely documented by now, AFAIK. After
> > > > > PXE boots the deploy kernel and ramdisk, it exposes the hard
> > > > > drive via iSCSI and notifies Ironic. After that, Ironic
> > > > > partitions the disk, copies an image, and reboots the node
> > > > > with the final kernel and ramdisk.
> > > > >
> > > > > On Wed, 2014-06-04 at 19:20 +0800, 严超 wrote:
> > > > > > Hi, All:
> > > > > >
> > > > > > I searched a lot about how Ironic automatically installs an
> > > > > > image on bare metal, but there seems to be no clear workflow
> > > > > > out there.
> > > > > >
> > > > > > What I know is: in traditional PXE, a bare metal node pulls
> > > > > > an image from the PXE server using TFTP. In the TFTP root
> > > > > > there is a ks.cfg which tells the installer which image to
> > > > > > kickstart.
> > > > > >
> > > > > > But in Ironic there is no ks.cfg pointed to in TFTP. How does
> > > > > > a bare metal node know which image to install? Is there any
> > > > > > clear workflow I can read?
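(So, if I follow: instead of a ks.cfg, Ironic itself writes a per-node pxelinux config under the TFTP root that boots the deploy k+r, roughly like the sketch below; the paths and kernel parameters are paraphrased, not the exact template:)

    # On the conductor host -- the per-node config generated by Ironic:
    cat /tftpboot/$NODE_UUID/config
    # default deploy
    #
    # label deploy
    # kernel /tftpboot/$NODE_UUID/deploy_kernel
    # append initrd=/tftpboot/$NODE_UUID/deploy_ramdisk \
    #        iscsi_target_iqn=... deployment_key=... ironic_api_url=...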
_______________________________________________ OpenStack-dev mailing list [email protected] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
