
bridged network failing #921

Closed
jperry opened this issue May 7, 2012 · 69 comments

Comments

@jperry

jperry commented May 7, 2012

Hi,

My vagrant box is having issues recently trying to connect through a bridged network. This is the output:

[centos] Matching MAC address for NAT networking...
[centos] Clearing any previously set forwarded ports...
[centos] Forwarding ports...
[centos] -- 22 => 2222 (adapter 1)
[centos] Creating shared folders metadata...
[centos] Clearing any previously set network interfaces...
[centos] Available bridged network interfaces:
1) en0: Ethernet 2
2) en1: AirPort
What interface should the network bridge to? 1
[centos] Preparing network interfaces based on configuration...
[centos] Booting VM...
[centos] Waiting for VM to boot. This can take a few minutes.
[centos] VM booted and ready for use!
[centos] Configuring and enabling network interfaces...
rake aborted!
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

/sbin/ifup eth1 2> /dev/null

Tasks: TOP => test => vagrant:provision => vagrant:up
(See full trace by running task with --trace)
@twirrim

twirrim commented Jul 21, 2012

I've been hitting the same problem with CentOS. The vagrant box was produced using the veewee centos 6.2 template, and actually works fine. The interface is up and running and bridged, but Vagrant seems a little confused about it.

I've put a copy of the box up here: http://paulgraydon.co.uk/master.box. The relevant Vagrantfile section:

  config.vm.define :master do |master|
    master.vm.box = "Centos6"
    master.vm.network :bridged
  end

Running that command manually without redirect:

[root@master ~]# ifup eth1

Determining IP information for eth1...dhclient(1250) is already running - exiting. 

This version of ISC DHCP is based on the release available
on ftp.isc.org.  Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.

Please report for this software via the Red Hat Bugzilla site:
    http://bugzilla.redhat.com

exiting.
 failed.

@nickpresta

I also sometimes hit the same problem running CentOS 6.0 x86_64 using the Veewee template.

@gregburek

I'm getting the same on RHEL 6.2 and CentOS 6.2 guests, and I believe I've figured out why this is happening: on RHEL-like distros, ifup fails if the interface is already up and has dhclient listening on it.

@twirrim: I booted your box with no eth1 and saw a /etc/sysconfig/network-scripts/ifcfg-eth1 of:

#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
BOOTPROTO=dhcp
ONBOOT=yes
DEVICE=eth1
#VAGRANT-END

This means that before vagrant comes in to change out that file, dhclient has already requested an address and an ifup command will fail.

My RHEL 6.2 box originally has no /etc/sysconfig/network-scripts/ifcfg-eth1, so the first boot with an additional NIC works fine. But on a reload, the VM boots with DHCP already configured, and dhclient will not allow a clean ifup:

$ vagrant up
[node1] Importing base box 'rhel62'...
[node1] Matching MAC address for NAT networking...
[node1] Clearing any previously set forwarded ports...
[node1] Forwarding ports...
[node1] -- 22 => 2222 (adapter 1)
[node1] Creating shared folders metadata...
[node1] Clearing any previously set network interfaces...
[node1] Preparing network interfaces based on configuration...
[node1] Booting VM...
[node1] Waiting for VM to boot. This can take a few minutes.
[node1] VM booted and ready for use!
[node1] Configuring and enabling network interfaces...
[node1] Mounting shared folders...
[node1] -- v-root: /vagrant
$ vagrant reload
[node1] Attempting graceful shutdown of VM...
[node1] Clearing any previously set forwarded ports...
[node1] Forwarding ports...
[node1] -- 22 => 2222 (adapter 1)
[node1] Creating shared folders metadata...
[node1] Clearing any previously set network interfaces...
[node1] Preparing network interfaces based on configuration...
[node1] Booting VM...
[node1] Waiting for VM to boot. This can take a few minutes.
[node1] VM booted and ready for use!
[node1] Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

/sbin/ifup eth1 2> /dev/null

Looking at ./plugins/guests/redhat/guest.rb it seems that there is an ifdown command before the ifup, but the network interface seems to stay up. I'll try running vm.channel.sudo("/sbin/ifdown eth#{interface} 2> /dev/null", :error_check => true) to see if it succeeds.
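
For anyone who wants to confirm the dhclient conflict by hand, a minimal sketch (assumes you run as root inside the guest and that the bridged NIC is eth1):

# check whether a dhclient instance is already holding a lease
ps aux | grep '[d]hclient'

# release it and cycle the interface; ifup should then succeed
pkill dhclient                     # note: this also drops the NAT lease on eth0
/sbin/ifdown eth1 2> /dev/null
/sbin/ifup eth1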

@dcrosta

dcrosta commented Aug 23, 2012

I can confirm this behavior on a (different) CentOS 6.2 box, also built from the veewee template. I've confirmed as well that setting ONBOOT=no in /etc/sysconfig/network-scripts/ifcfg-eth1 allows the halt/up cycle to succeed. This is just a work-around, obviously, not a solution to the root problem.
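
That work-around can be scripted. A minimal sketch, assuming the Vagrant-generated file is at the standard RHEL path:

sed -i 's/^ONBOOT=yes/ONBOOT=no/' /etc/sysconfig/network-scripts/ifcfg-eth1

After that, vagrant reload should survive the halt/up cycle, at the cost of the interface not coming up on reboots done outside Vagrant.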

@blalor

blalor commented Jan 7, 2013

I'm also having this problem with config.vm.network :hostonly, :dhcp and a CentOS 6.3 box I built myself with Veewee.

@leifmadsen

Yep same issue here. Makes working with :bridged interfaces frustrating.

@jphalip

jphalip commented Feb 23, 2013

This is probably related to #997 (maybe a duplicate). FWIW, the error went away for me after restarting VirtualBox...

@mitchellh

Has anyone made progress on figuring out a fix? It seems that the RedHat guest is "broken" when configuring network interfaces. I'm not a big RedHat or CentOS user, so I'm unsure where the root issue is; I'd love community help here.

@blalor

blalor commented Apr 2, 2013

I think the key is in @gregburek's comment above: #921 (comment)

@ellisio

ellisio commented Apr 2, 2013

It appears that if you get this error, doing the following works:

vagrant ssh
sudo su
vi /etc/sysconfig/network-scripts/ifcfg-eth1

Change ONBOOT=yes to ONBOOT=no then run vagrant reload.

I literally just did this and it worked.

I also have this in my Vagrantfile: config.vm.provision :shell, :path => "developer/networking.sh"

networking.sh:

#!/bin/bash
rm -f /etc/udev/rules.d/70-persistent-net.rules
rm -f /etc/sysconfig/network-scripts/ifcfg-eth1
/etc/init.d/network restart

This is for CentOS 6.4 (Final).

@chilicat

chilicat commented Apr 2, 2013

My solution to the problem was simply to ignore the exit code of the ifup command. The advantage of this approach is that if a user reboots the machine (outside of Vagrant), the network connection will be re-established.

vm.communicate.sudo("/sbin/ifup eth#{interface} 2> /dev/null", :error_check => false)

@leifmadsen

@chilicat I don't quite follow how you're using that, as 'communicate' seems to be an undefined method.

@alexandrem

I have this issue with Centos63, Vagrant 1.1.4, VirtualBox 4.2.10 on OSX Lion.

I am unable to have the private_network working on my guest.

The real problem is that the networking isn't being set up correctly on the guest system. Even if you try to ignore the error by adding the line:

vm.communicate.sudo("/sbin/ifup eth#{interface} 2> /dev/null", :error_check => false)

to the Vagrant code (plugins/guests/redhat/guest.rb), the network bridge isn't brought up, so your VM isn't reachable at the IP you try to assign it.

I tried with the latest Vagrant 1.2 and it was even worse: I got a "network_scripts_dir" capability error for the redhat guest. I saw that there was a huge refactoring in the network code for all guests; maybe something is broken there, or maybe it's my system.

Anyway, a real fix would be greatly appreciated!
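
A quick way to verify that symptom from inside the guest, sketched under the assumption that the private network sits on eth1:

/sbin/ip addr show eth1    # no "inet ..." line means the address was never assigned
/sbin/ifconfig eth1        # older equivalent; look for a missing "inet addr"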

@mitchellh

Alright, so the main issue I'm confused about is that the RedHat guest does an ifdown prior to the ifup. Why is it failing even though the interface went down first? Sorry for the somewhat simple questions; I don't have an easily accessible RH box lying around.

@ellisio

ellisio commented Apr 7, 2013

My guess (off the top of my head) is that something in the network initialization section reverts the bridged connection on boot: CentOS boots, then Vagrant tries to replace the network configuration and restarts networking, and in that window CentOS freaks out and doesn't know where to get its networking information from.

Using the steps I posted above seems to resolve the issue (6.4 w/ Vagrant 1.1.4).

@mitchellh

Your approach does work, but I'm afraid of changing things to ONBOOT=no by default for people who might reboot their machines outside of Vagrant and lose their networks... I'd love a solution that didn't make that compromise.

@ellisio

ellisio commented Apr 7, 2013

I can test when I get into the office tomorrow, but I'm pretty sure that change still holds when running "shutdown -(r,h) now".

The reason we need this is because we have a cluster of VMs for each dev to simulate load balancing and MySQL replication so we need IPs to stick after the VMs come online. (Curse you Percona!)


@blalor

blalor commented Apr 8, 2013

Steps to reproduce:

  1. Use this Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "CentOS-6.3-x86_64-reallyminimal"
  config.vm.box_url = "https://s3.amazonaws.com/1412126a-vagrant/CentOS-6.3-x86_64-reallyminimal.box"
  config.vm.network :private_network, type: :dhcp
end
  2. vagrant up
  3. vagrant ssh
  4. sudo halt
  5. vagrant up fails; result:
Bringing machine 'default' up with 'virtualbox' provider...
[default] Setting the name of the VM...
[default] Clearing any previously set forwarded ports...
[default] Creating shared folders metadata...
[default] Clearing any previously set network interfaces...
[default] Preparing network interfaces based on configuration...
[default] Forwarding ports...
[default] -- 22 => 2222 (adapter 1)
[default] Booting VM...
[default] Waiting for VM to boot. This can take a few minutes.
[default] VM booted and ready for use!
[default] Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

/sbin/ifup eth1 2> /dev/null

@mitchellh

FIXED! It ended up being a race condition with booting. So I was able to keep the ONBOOT, and just fixed this by adding a retry loop around it. Heh.

78d4d0a
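
The shape of the fix, sketched in shell for illustration only (the real change is in Vagrant's RedHat guest plugin, per the commit above; the retry count and sleep interval are assumptions, not the values in the commit):

# retry ifup a few times to ride out the boot-time race
for attempt in 1 2 3 4 5; do
  /sbin/ifup eth1 2> /dev/null && break
  sleep 2
done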

@blalor

blalor commented Apr 8, 2013

Woo hoo! Thanks!

@ellisio

ellisio commented Apr 8, 2013

Sweeeeeet!!


@leifmadsen

👍

@xorti

xorti commented Apr 9, 2013

Awesome! I ran into that issue just today; glad it's now fixed!

@chiefy

chiefy commented Apr 15, 2013

Cheetos.

@ellisio

ellisio commented Apr 15, 2013

Any ETA on 1.1.6 being released? I'm now seeing this behavior with 1.1.5 on Ubuntu as well...

@chiefy

chiefy commented Apr 15, 2013

@awellis13 I just installed the gem from source and it seems to work well; you can do that until an official release is made.

@ellisio

ellisio commented Apr 15, 2013

Eh, I'll do that for my personal dev. I need an official release for work so we can go through our Nazi process of pushing new software to our developers. haha


@michael-harrison

@jprosevear I've turned on the logging and created a gist for your review: https://gist.github.com/michael-harrison/5746092

Following is the Vagrantfile in full:

Vagrant.configure('2') do |config|
  config.vm.box = 'puppet_metal'
  config.vm.box_url = 'https://s3-ap-southeast-2.amazonaws.com/ntech-boxes/puppet_metal.box'
#  config.vm.network :forwarded_port, guest: 22, host: 8022
  config.vm.network :private_network, ip: '10.202.202.11'
  config.vm.hostname = 'duster.sp11.ntechhosting.com'

  config.vm.provision :puppet do |puppet|
    puppet.module_path = 'modules'
    puppet.options = '--verbose --debug'
  end

  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end
end

@jprosevear

Need VAGRANT_LOG=DEBUG

@michael-harrison
Copy link

@jprosevear sorry, the gist has been updated with the debug log. I did note the following error:

INFO ssh: Execute: /sbin/ifdown eth1 2> /dev/null (sudo=true)
DEBUG ssh: stdout: ERROR    : [ipv6_test_device_status] Missing parameter 'device' (arg 1)
DEBUG ssh: Exit status: 0

@michael-harrison

@jprosevear Quick FYI: in the interim I've commented out the network configuration so I can continue the work that I'm doing.

@jprosevear

You have a different error than I fixed:
DEBUG ssh: stdout: Device eth1 does not seem to be present, delaying initialization.

@michael-harrison
Copy link

@jprosevear I agree based on the debug logs. Do you think it's worthwhile raising another issue?

@michael-harrison

@Aigeruth The environment variable is already being set on master (which I did my testing with), but it made no real difference to what I'm seeing. Based on my logging the error isn't happening on the ifup; it's on the ifdown: https://gist.github.com/michael-harrison/5746092

7178 INFO ssh: Execute: /sbin/ifdown eth1 2> /dev/null (sudo=true)
7179 DEBUG ssh: stdout: ERROR : [ipv6_test_device_status] Missing parameter 'device' (arg 1)

@ellisio

ellisio commented Jun 14, 2013

Call it a hunch, but if you're getting a device-not-found on ifdown, I feel the base box you're using was not built correctly.


@michael-harrison

@awellis13 It is possible. The box I'm having problems with is based on two other boxes: I created a box using the 'centos-64-x64-vbox4210-nocm' template from https://github.com/puppetlabs/puppet-vagrant-boxes, used the resulting box to create a second box, which is in turn used by the third box (the one having the problem). I'm in the process of rebuilding the second box to see if that resolves the issue.

@michael-harrison

@awellis13 I've done the rebuild and it hasn't resolved the issue :( I believe this issue sounds more like #1577.

@Aigeruth

@michael-harrison yes, that commit is on master (1029538). When I pulled it into my fork and pushed it, GitHub placed this reference automatically (notice that it links to my fork: https://github.com/Aigeruth/vagrant/commit/982010e8d1d53d7187b7dd8b281ad66659b6cdfb); I didn't make this commit.

@michael-harrison

I've managed to resolve my immediate problem by not specifying network details when creating my base box. So you don't have to re-read the trail of comments, here are the steps:

BOX 1
Create the bare metal box using the centos-64-x64-vbox4210-nocm template provided by PuppetLabs (see https://github.com/puppetlabs/puppet-vagrant-boxes).

Once the bare metal box is created, package the VM:

vagrant package --base centos-64-x64-vbox4210-nocm --output centos-64-x64-vbox4210-nocm.box

BOX 2
Create the base puppet box without any networking configuration (i.e. no vm.network :private_network, ip: '10.202.202.10'), using something like the following Vagrantfile:

Vagrant.configure('2') do |config|
  config.vm.box = 'centos-64-x64-vbox4210-nocm'
  config.vm.box_url = 'https://s3-ap-southeast-2.amazonaws.com/ntech-boxes/centos-64-x64-vbox4210-nocm.box'
  config.vm.hostname = 'puppet.metal.ntechhosting.com'

  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end
end

Build the box and package it

$ vagrant up
$ vagrant package --base puppet_metal_1371217607 --output puppet_metal.box

Add it to Vagrant

vagrant box add puppet_metal puppet_metal.box

BOX 3
This is where I ran into problems originally. The following Vagrantfile would fail when doing a vagrant up, but with the removal of the networking in BOX 2 it succeeds:

Vagrant.configure('2') do |config|
  config.vm.box = 'puppet_metal'
  config.vm.box_url = 'https://s3-ap-southeast-2.amazonaws.com/ntech-boxes/puppet_metal.box'
  config.vm.network :forwarded_port, guest: 22, host: 8022
  config.vm.network :private_network, ip: '10.202.202.12', :adapter => 2
  config.vm.hostname = 'duster.test.ntechhosting.com'

  config.vm.provision :puppet do |puppet|
    puppet.module_path = 'modules'
    puppet.options = '--verbose'
  end

  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id,"--memory", "2048"]
  end
end

Notes
While this doesn't resolve the issue, it is a work-around. I'm by no means proficient in configuring networking on CentOS, but I was able to identify what was misconfigured using the following script: https://github.com/rodan/fix_eth_order. Running the tool on the broken VM let me repair the networking, but any fixes were wiped out on the next vagrant up. The diagnostic output from the script:

$ sudo -s
$ sh fix_eth_order.sh -b
 * bus id 0000:00:03.0 eth0
 * bus id 0000:00:08.0 eth2 should be eth1
 * NICs are not in pci bus order

When it should look something like:

$ sudo -s
$ sh fix_eth_order.sh -b
 * bus id 0000:00:03.0 eth0
 * bus id 0000:00:08.0 eth1

Hope this helps in resolving the issue at hand :)

@ellisio

ellisio commented Jun 14, 2013

Before packaging your base box did you run:

rm -f /etc/sysconfig/network-scripts/ifcfg-eth1

There is also a persistent-net file somewhere that you should remove but I can't remember it off the top of my head.


@michael-harrison

Hi @awellis13, I didn't do that but I'll have a look at doing it. If this is needed then I think I'll add it to my puppet scripts to prepare the VM for packaging.

@kanzure

kanzure commented Aug 25, 2013

I still experience this problem (with eth1 'not seeming to be present') with vagrant 1.2.7. The solution that worked for me is from one of the veewee definitions for centos:

sed -i /HWADDR/d /etc/sysconfig/network-scripts/ifcfg-eth0
rm /etc/udev/rules.d/70-persistent-net.rules

This was from veewee's cleanup.sh script for centos.

After vagrant reload, the contents of /etc/udev/rules.d/70-persistent-net.rules are:

# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="(mac addr)", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x8086:0x100e (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="(mac addr)", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

@ellisio

ellisio commented Aug 26, 2013

@bryan - Make sure your base box has run those two commands successfully before building with Vagrant.

If you grabbed those from Veewee, I assume you're building it with Veewee? Check your definitions.rb file and make sure cleanup.sh is in the list of scripts to be run.


@kanzure

kanzure commented Aug 26, 2013

@awellis13 Thank you for the heads up, but no I am not using veewee at the moment. I was curious why my veewee builds from before were working, so I compared what veewee was doing against the advice in this bug report thread.

@ellisio

ellisio commented Aug 26, 2013

I used to have these issues all the time up until 1.2.0, which is also when I discovered Veewee and started using it.

Try rebuilding your base box and see if the issue goes away.


@scarolan

I'm experiencing the same problem. What's frustrating is that my configuration was working perfectly until a few hours ago. I have no idea what (if anything) changed; it just stopped working this morning after an entire day's successful testing yesterday.

Here's my setup:

OSX 10.8.4
Vagrant 1.3.2
vagrant-vmware-fusion (2.0.5)
Box file (got this from vagrantbox.es):
lxc_box_url = "https://dl.dropbox.com/u/5721940/vagrant-boxes/vagrant-centos-6.4-x86_64-vmware_fusion.box"

Relevant lines from my Vagrantfile:

$bridge_name = "en0: Wi-Fi (AirPort)"

global_config.vm.define "CentOS_VM" do |lxc|
  lxc.vm.box = lxc_box
  lxc.vm.box_url = lxc_box_url
  lxc.vm.network :public_network, :bridge => $bridge_name
  lxc.vm.provider :vmware_fusion do |vb|
    vb.vmx["memsize"] = "4096"
    vb.vmx["numvcpus"] = "2"
  end
  lxc.vm.provision :shell, :inline => $lxc_host_script
  #lxc.vm.provision :shell, :inline => $lxc_guest_script
  lxc.vm.provision :shell, :path => "build_guests.sh"
end

Steps to reproduce:

  1. Type 'vagrant up --provider vmware_fusion --provision'
  2. Get this error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

ARPCHECK=no /sbin/ifup eth1 2> /dev/null

Stdout from the command:

Determining IP information for eth1... failed.

Stderr from the command:

ifcfg-eth1 has this in it:

#VAGRANT-BEGIN
#The contents below are automatically generated by Vagrant. Do not modify.
BOOTPROTO=dhcp
ONBOOT=yes
DEVICE=eth1
#VAGRANT-END
#VAGRANT-BEGIN
#The contents below are automatically generated by Vagrant. Do not modify.
BOOTPROTO=dhcp
ONBOOT=yes
DEVICE=eth1
#VAGRANT-END
#VAGRANT-BEGIN
#The contents below are automatically generated by Vagrant. Do not modify.
BOOTPROTO=dhcp
ONBOOT=yes
DEVICE=eth1
#VAGRANT-END
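
Note the Vagrant stanza has been appended three times. One hedged way to recover (a sketch; Vagrant regenerates this file on the next reload, but back it up first just in case):

cp /etc/sysconfig/network-scripts/ifcfg-eth1 /root/ifcfg-eth1.bak
rm -f /etc/sysconfig/network-scripts/ifcfg-eth1
# then, from the host:
#   vagrant reload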

@juhaniemi

Having this issue with Vagrant 1.4.0, VMware Fusion 6, and the CentOS 6.4 64-bit box from Puppetlabs. I tried re-building that box manually (#921 (comment)), but after adding private network settings to the Vagrantfile it fails.

I tested this with Vagrant's officially supported precise64 box and it works, but I need a CentOS base, so far with no luck.

@mjuarez

mjuarez commented Dec 19, 2013

It might be worth looking at this bug regarding ifup and ARPCHECK in Fedora:
https://bugzilla.redhat.com/show_bug.cgi?id=974603

@wvega

wvega commented Jan 8, 2014

@mjuarez the patch included in the bug report you mentioned fixed the problem for me. Now I'm able to use vagrant up and vagrant reload without issues. Thank you!

I'm running Vagrant 1.4.2 with VirtualBox 4.3.6. The box I'm using is based on the Fedora 19 template from veewee.

For those interested, the following code applies the patch from Bug 974603 on your VM (run as root).

yum -y install patch
wget https://git.fedorahosted.org/cgit/initscripts.git/patch/?id=ce8b72f604079a5516a12f840ed6a64629b0131e -O ifup-eth.patch
patch -p1 -f -b -d /etc < ifup-eth.patch
rm -f ifup-eth.patch

@pporada-gl

I packaged a CentOS 7.1 VirtualBox box in which I run the following, and I'm still encountering issues starting up a Vagrant VM.

rm -f /etc/udev/rules.d/70-persistent-net.rules;
rm -f /etc/sysconfig/network-scripts/ifcfg-eth1
ln -sf /dev/null /etc/udev/rules.d/70-persistent-net.rules
ln -sf /dev/null /lib/udev/rules.d/75-persistent-net-generator.rules

for ndev in `ls -1 /etc/sysconfig/network-scripts/ifcfg-*`; do
    if [ "`basename $ndev`" != "ifcfg-lo" ]; then
        sed -i '/^HWADDR/d' "$ndev";
        sed -i '/^UUID/d' "$ndev";
    fi
done

The output from the GUI shows "work still pending". I can log in, but no IP address is ever assigned. I have the following lines relevant to networking in my Vagrantfile:

config.vm.provider :virtualbox do |vb|
  vb.memory = 1024
  vb.cpus = 1
  vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
  vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
config.vm.network "private_network", ip: "192.168.200.42"

I can run vagrant destroy -f and vagrant up till the cows come home and SOMETIMES I can get the vagrant to actually start successfully.

@ellisio

ellisio commented Oct 13, 2015

@pporada-gl Try using the following. It's a snippet I found somewhere at some point to package our CentOS 7.1 VM using Packer. This is in our cleanup.sh where we also run the dd command.

# ref https://github.com/box-cutter/centos-vm/
/bin/systemctl stop NetworkManager.service
for ifcfg in `ls /etc/sysconfig/network-scripts/ifcfg-* | grep -v ifcfg-lo` ; do
  rm -f $ifcfg
done

cat <<EOF | cat >> /etc/rc.d/rc.local
LANG=C
for con in \`nmcli -t -f uuid con\`; do
  if [ "\$con" != "" ]; then
    nmcli con del \$con
  fi
done
gwdev=\`nmcli dev | grep ethernet | egrep -v 'unmanaged' | head -n 1 | awk '{print \$1}'\`
if [ "\$gwdev" != "" ]; then
  nmcli c add type eth ifname \$gwdev con-name \$gwdev
fi
chmod -x /etc/rc.d/rc.local
EOF

chmod +x /etc/rc.d/rc.local

rm -f /etc/ssh/ssh_host_*
rm -f /var/lib/NetworkManager/*

@ellisio

ellisio commented Oct 13, 2015

@pporada-gl Actually, it appears this is the root cause of your issue. I just added it to our Vagrantfile to test and it caused the SSH timeout to occur:

vb.customize ["modifyvm", :id, "--nictype1", "virtio"]

Remove that and see if you can boot.
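
If the VM already exists, the same change can be made from the host with VBoxManage (a sketch: the VM must be powered off, "node1" is a placeholder for your VM's name, and 82540EM is VirtualBox's default Intel NIC emulation):

VBoxManage modifyvm "node1" --nictype1 82540EM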

@pporada-gl

@ellisthedev,
You're my hero today.

@ellisio

ellisio commented Oct 13, 2015

@pporada-gl Cheers mate, happy coding!

@ghost ghost locked and limited conversation to collaborators Apr 7, 2020