Commit 86be77c: first commit
rolele committed Dec 16, 2016 (0 parents)
Showing 52 changed files with 704 additions and 0 deletions.
1 change: 1 addition & 0 deletions .vagrant/hostmanager/id
@@ -0,0 +1 @@
fff424af-6f89-46a4-9828-257a5b4280ff
1 change: 1 addition & 0 deletions .vagrant/machines/manager0/virtualbox/action_provision
@@ -0,0 +1 @@
1.5:efff3d97-6665-4fa3-ad71-4e4679695f37
1 change: 1 addition & 0 deletions .vagrant/machines/manager0/virtualbox/action_set_name
@@ -0,0 +1 @@
1481926183
1 change: 1 addition & 0 deletions .vagrant/machines/manager0/virtualbox/creator_uid
@@ -0,0 +1 @@
501
1 change: 1 addition & 0 deletions .vagrant/machines/manager0/virtualbox/id
@@ -0,0 +1 @@
efff3d97-6665-4fa3-ad71-4e4679695f37
1 change: 1 addition & 0 deletions .vagrant/machines/manager0/virtualbox/index_uuid
@@ -0,0 +1 @@
e6250011e5c3402faaefe07f1146a279
1 change: 1 addition & 0 deletions .vagrant/machines/manager0/virtualbox/synced_folders
@@ -0,0 +1 @@
{"virtualbox":{"/vagrant":{"guestpath":"/vagrant","hostpath":"/Users/others/glusterfs-ansible-test","disabled":false,"__vagrantfile":true}}}
1 change: 1 addition & 0 deletions .vagrant/machines/manager1/virtualbox/action_provision
@@ -0,0 +1 @@
1.5:4bbe96f6-6629-4f56-ba0f-f0c54f38d46b
1 change: 1 addition & 0 deletions .vagrant/machines/manager1/virtualbox/action_set_name
@@ -0,0 +1 @@
1481926246
1 change: 1 addition & 0 deletions .vagrant/machines/manager1/virtualbox/creator_uid
@@ -0,0 +1 @@
501
1 change: 1 addition & 0 deletions .vagrant/machines/manager1/virtualbox/id
@@ -0,0 +1 @@
4bbe96f6-6629-4f56-ba0f-f0c54f38d46b
1 change: 1 addition & 0 deletions .vagrant/machines/manager1/virtualbox/index_uuid
@@ -0,0 +1 @@
e427806ce99b4225919842e3d3fa9e33
1 change: 1 addition & 0 deletions .vagrant/machines/manager1/virtualbox/synced_folders
@@ -0,0 +1 @@
{"virtualbox":{"/vagrant":{"guestpath":"/vagrant","hostpath":"/Users/others/glusterfs-ansible-test","disabled":false,"__vagrantfile":true}}}
1 change: 1 addition & 0 deletions .vagrant/machines/manager2/virtualbox/action_provision
@@ -0,0 +1 @@
1.5:c155bb16-7f9b-4042-bafb-20e08b0deb20
1 change: 1 addition & 0 deletions .vagrant/machines/manager2/virtualbox/action_set_name
@@ -0,0 +1 @@
1481926311
1 change: 1 addition & 0 deletions .vagrant/machines/manager2/virtualbox/creator_uid
@@ -0,0 +1 @@
501
1 change: 1 addition & 0 deletions .vagrant/machines/manager2/virtualbox/id
@@ -0,0 +1 @@
c155bb16-7f9b-4042-bafb-20e08b0deb20
1 change: 1 addition & 0 deletions .vagrant/machines/manager2/virtualbox/index_uuid
@@ -0,0 +1 @@
e0305370aa854b27aef4fc73e4d88566
1 change: 1 addition & 0 deletions .vagrant/machines/manager2/virtualbox/synced_folders
@@ -0,0 +1 @@
{"virtualbox":{"/vagrant":{"guestpath":"/vagrant","hostpath":"/Users/others/glusterfs-ansible-test","disabled":false,"__vagrantfile":true}}}
1 change: 1 addition & 0 deletions .vagrant/machines/worker0/virtualbox/action_provision
@@ -0,0 +1 @@
1.5:82525358-02ac-4338-b844-5bc14b05db53
1 change: 1 addition & 0 deletions .vagrant/machines/worker0/virtualbox/action_set_name
@@ -0,0 +1 @@
1481926373
1 change: 1 addition & 0 deletions .vagrant/machines/worker0/virtualbox/creator_uid
@@ -0,0 +1 @@
501
1 change: 1 addition & 0 deletions .vagrant/machines/worker0/virtualbox/id
@@ -0,0 +1 @@
82525358-02ac-4338-b844-5bc14b05db53
1 change: 1 addition & 0 deletions .vagrant/machines/worker0/virtualbox/index_uuid
@@ -0,0 +1 @@
20a6390b375243d4a87ccf16788507a6
1 change: 1 addition & 0 deletions .vagrant/machines/worker0/virtualbox/synced_folders
@@ -0,0 +1 @@
{"virtualbox":{"/vagrant":{"guestpath":"/vagrant","hostpath":"/Users/others/glusterfs-ansible-test","disabled":false,"__vagrantfile":true}}}
151 changes: 151 additions & 0 deletions README.md
@@ -0,0 +1,151 @@

```
cd ansible
vagrant up
ansible-playbook --connection=ssh glusterfs.yml
```

The first time I run ```ansible-playbook --connection=ssh glusterfs.yml``` I get the error below, which stops Ansible:
```
TASK [glusterfs : Configure Gluster volume.] ***********************************
fatal: [manager0]: FAILED! => {"changed": false, "failed": true, "msg": "error running gluster (/sbin/gluster volume create gluster replica 2 transport tcp manager0:/srv/gluster/brick manager1:/srv/gluster/brick manager2:/srv/gluster/brick worker0:/srv/gluster/brick force) command (rc=1): volume create: gluster: failed: Host worker0 is not in 'Peer in Cluster' state\n"}
```
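
Judging from the command quoted in that error, the `Configure Gluster volume.` task appears to be built on Ansible's `gluster_volume` module. The block below is only my reconstruction, for orientation: the values come from the error output, and the actual YAML in this role may differ.

```yaml
# Hypothetical reconstruction of the failing task (inferred from the error
# message above, not copied from the role in this repo).
- name: Configure Gluster volume.
  gluster_volume:
    state: present
    name: gluster                       # volume name seen in the error
    bricks: /srv/gluster/brick          # brick path seen in the error
    replicas: 2
    transport: tcp
    cluster: "{{ groups['gluster'] }}"  # manager0-2 and worker0 in this inventory
    host: "{{ inventory_hostname }}"
    force: yes
  run_once: true                        # the output shows it only ran on manager0
```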

If I connect to manager0 and check the status, I see that everything is OK:
```
vagrant ssh manager0
sudo su
[root@manager0 vagrant]# gluster peer status
Number of Peers: 3
Hostname: manager1
Uuid: 7805e0d8-4c29-405b-8603-12bda680cc01
State: Peer in Cluster (Connected)
Hostname: manager2
Uuid: 67d01bf8-5955-44b5-9236-ccc320bc442e
State: Peer in Cluster (Connected)
Hostname: worker0
Uuid: c386cfdb-9430-49b9-8879-44a6879d8c0a
State: Peer in Cluster (Connected)
[root@manager0 vagrant]# gluster pool list
UUID Hostname State
7805e0d8-4c29-405b-8603-12bda680cc01 manager1 Connected
67d01bf8-5955-44b5-9236-ccc320bc442e manager2 Connected
c386cfdb-9430-49b9-8879-44a6879d8c0a worker0 Connected
9702469c-2a35-45fb-92e4-28a6db51eade localhost Connected
```


If I re-run the ansible-playbook command, the error disappears.
I check the status again; nothing has changed:
```
[root@manager0 vagrant]# gluster peer status
Number of Peers: 3
Hostname: manager1
Uuid: 7805e0d8-4c29-405b-8603-12bda680cc01
State: Peer in Cluster (Connected)
Hostname: manager2
Uuid: 67d01bf8-5955-44b5-9236-ccc320bc442e
State: Peer in Cluster (Connected)
Hostname: worker0
Uuid: c386cfdb-9430-49b9-8879-44a6879d8c0a
State: Peer in Cluster (Connected)
```


I would like the Ansible role to work on the first run, without a manual retry.
I tried adding a pause before the failing task, but this does not solve the problem.
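
One possible workaround (not implemented in this repo) is to poll the peer state instead of sleeping for a fixed time: keep running `gluster peer status` until every other member of the `[gluster]` inventory group reports `Peer in Cluster`, and only then create the volume. The task below is a sketch of that idea under those assumptions, not code from this repository.

```yaml
# Sketch of a wait-for-peers task that could run just before
# "Configure Gluster volume." (hypothetical; task name and retry numbers are mine).
- name: Wait until all peers are in 'Peer in Cluster' state.
  command: gluster peer status
  register: peer_status
  become: true
  changed_when: false
  run_once: true
  # Each node should eventually report every other [gluster] member as a peer.
  until: "peer_status.stdout.count('Peer in Cluster') == (groups['gluster'] | length) - 1"
  retries: 12
  delay: 5
```

Alternatively, the volume-creation task itself could be registered and retried with `until`/`retries`/`delay`, so that a transient "not in 'Peer in Cluster' state" failure is simply retried instead of aborting the play.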

If you want to reproduce the error:
```
cd ansible
yes | vagrant destroy
vagrant up
ansible-playbook --connection=ssh glusterfs.yml
```

Full output:
```
PLAY [gluster] *****************************************************************
TASK [setup] *******************************************************************
ok: [manager0]
ok: [manager2]
ok: [worker0]
ok: [manager1]
TASK [geerlingguy.glusterfs : Include OS-specific variables.] ******************
ok: [manager1]
ok: [manager2]
ok: [manager0]
ok: [worker0]
TASK [geerlingguy.glusterfs : Install Prerequisites] ***************************
changed: [manager1] => (item=[u'centos-release-gluster'])
changed: [worker0] => (item=[u'centos-release-gluster'])
changed: [manager2] => (item=[u'centos-release-gluster'])
changed: [manager0] => (item=[u'centos-release-gluster'])
TASK [geerlingguy.glusterfs : Install Packages] ********************************
changed: [worker0] => (item=[u'glusterfs-server', u'glusterfs-client'])
changed: [manager0] => (item=[u'glusterfs-server', u'glusterfs-client'])
changed: [manager2] => (item=[u'glusterfs-server', u'glusterfs-client'])
changed: [manager1] => (item=[u'glusterfs-server', u'glusterfs-client'])
TASK [geerlingguy.glusterfs : Add PPA for GlusterFS.] **************************
skipping: [manager0]
skipping: [manager1]
skipping: [manager2]
skipping: [worker0]
TASK [geerlingguy.glusterfs : Ensure GlusterFS will reinstall if the PPA was just added.] ***
skipping: [manager0] => (item=[])
skipping: [manager1] => (item=[])
skipping: [manager2] => (item=[])
skipping: [worker0] => (item=[])
TASK [geerlingguy.glusterfs : Ensure GlusterFS is installed.] ******************
skipping: [manager0] => (item=[])
skipping: [manager1] => (item=[])
skipping: [manager2] => (item=[])
skipping: [worker0] => (item=[])
TASK [geerlingguy.glusterfs : Ensure GlusterFS is started and enabled at boot.]
changed: [manager1]
changed: [worker0]
changed: [manager0]
changed: [manager2]
TASK [glusterfs : Ensure Gluster brick and mount directories exist.] ***********
changed: [manager1] => (item=/srv/gluster/brick)
changed: [worker0] => (item=/srv/gluster/brick)
changed: [manager0] => (item=/srv/gluster/brick)
changed: [manager2] => (item=/srv/gluster/brick)
changed: [manager1] => (item=/mnt/gluster)
changed: [worker0] => (item=/mnt/gluster)
changed: [manager0] => (item=/mnt/gluster)
changed: [manager2] => (item=/mnt/gluster)
TASK [glusterfs : pause] *******************************************************
Pausing for 10 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [manager0]
TASK [glusterfs : Configure Gluster volume.] ***********************************
fatal: [manager0]: FAILED! => {"changed": false, "failed": true, "msg": "error running gluster (/sbin/gluster volume create gluster replica 2 transport tcp manager0:/srv/gluster/brick manager1:/srv/gluster/brick manager2:/srv/gluster/brick worker0:/srv/gluster/brick force) command (rc=1): volume create: gluster: failed: Host worker0 is not in 'Peer in Cluster' state\n"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/Users/others/glusterfs-ansible-test/ansible/glusterfs.retry
PLAY RECAP *********************************************************************
manager0 : ok=7 changed=4 unreachable=0 failed=1
manager1 : ok=6 changed=4 unreachable=0 failed=0
manager2 : ok=6 changed=4 unreachable=0 failed=0
worker0 : ok=6 changed=4 unreachable=0 failed=0
```

133 changes: 133 additions & 0 deletions Vagrantfile
@@ -0,0 +1,133 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrant 1.7+ automatically inserts a different
# insecure keypair for each new VM created. The easiest way
# to use the same keypair for all the workers is to disable
# this feature and rely on the legacy insecure key.
# config.ssh.insert_key = false
#
# Note:
# As of Vagrant 1.7.3, it is no longer necessary to disable
# the keypair creation when using the auto-generated inventory.

# Requires vagrant-host-shell

VAGRANTFILE_API_VERSION = "2"
MANAGERS = 3
WORKERS = 1
# ANSIBLE_GROUPS = {
# "managers" => ["manager[1:#{MANAGERS}]"],
# "workers" => ["worker[1:#{WORKERS}]"],
# "elk" => ["manager[2:2]"],
# "influxdb" => ["manager[3:3]"],
# "flocker_control_service" => ["manager[1:1]"],
# "flocker_agents" => ["manager[1:#{MANAGERS}]", "worker[1:#{WORKERS}]"],
# "all_groups:children" => [
# "managers",
# "workers",
# "elk",
# "influxdb",
# "flocker_control_service",
# "flocker_agents"]
# }

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.hostmanager.enabled = true
  config.hostmanager.manage_host = true
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false
  config.hostmanager.include_offline = true

  config.vm.box = "tsihosting/centos7"

  config.vm.provider 'virtualbox' do |v|
    v.linked_clone = true if Vagrant::VERSION =~ /^1.8/
  end

  config.ssh.insert_key = false

  (0..(MANAGERS-1)).each do |manager_id|
    config.vm.define "manager#{manager_id}" do |manager|
      manager.vm.hostname = "manager#{manager_id}"
      manager.vm.network "private_network", ip: "192.168.76.#{20+manager_id}"
      if manager_id == 1
        manager.vm.provider "virtualbox" do |v|
          v.memory = 4096
          v.cpus = 4
        end
        manager.vm.network "forwarded_port", guest: 12201, host: 12201
        manager.vm.network "forwarded_port", guest: 12202, host: 12202
      else
        manager.vm.provider "virtualbox" do |v|
          v.memory = 1024
          v.cpus = 1
        end
      end
    end
  end

  (0..(WORKERS-1)).each do |worker_id|
    config.vm.define "worker#{worker_id}" do |worker|
      worker.vm.hostname = "worker#{worker_id}"
      worker.vm.network "private_network", ip: "192.168.76.#{30+worker_id}"
      worker.vm.provider "virtualbox" do |v|
        v.memory = 1024
        v.cpus = 1
      end

      # # Only execute the Ansible provisioner once,
      # # when all the workers are up and ready.
      # if worker_id == WORKERS

      # # Install any ansible galaxy roles
      # worker.vm.provision "shell", type: "host_shell" do |sh|
      # sh.inline = "cd ansible && ansible-galaxy install -r requirements.yml --ignore-errors"
      # end

      # #TODO provision should be done via ansible commands not in Vagrantfile
      # worker.vm.provision "swarm", type: "ansible" do |ansible|
      # ansible.limit = "all"
      # ansible.playbook = "ansible/swarm.yml"
      # ansible.verbose = "vv"
      # ansible.groups = ANSIBLE_GROUPS
      # end

      # # Additional provisioners are only called if --provision-with is passed
      # if ARGV.include? '--provision-with'
      # worker.vm.provision "consul", type: "ansible" do |ansible|
      # ansible.limit = "all"
      # ansible.playbook = "ansible/consul.yml"
      # ansible.verbose = "vv"
      # ansible.groups = ANSIBLE_GROUPS
      # end

      # worker.vm.provision "logging", type: "ansible" do |ansible|
      # ansible.limit = "all"
      # ansible.playbook = "ansible/logging.yml"
      # ansible.verbose = "vv"
      # ansible.sudo = true
      # ansible.groups = ANSIBLE_GROUPS
      # end

      # worker.vm.provision "monitoring", type: "ansible" do |ansible|
      # ansible.limit = "all"
      # ansible.playbook = "ansible/monitoring.yml"
      # ansible.verbose = "vv"
      # ansible.sudo = true
      # ansible.groups = ANSIBLE_GROUPS
      # end

      # worker.vm.provision "apps", type: "ansible" do |ansible|
      # # Only need to run against one of the managers since using swarm
      # ansible.limit = "managers*"
      # ansible.playbook = "ansible/apps.yml"
      # ansible.verbose = "vv"
      # ansible.groups = ANSIBLE_GROUPS
      # end
      # end
      # end
    end
  end
end
11 changes: 11 additions & 0 deletions ansible/ansible.cfg
@@ -0,0 +1,11 @@
[defaults]
hostfile = hosts
force_color = 1
host_key_checking = False
inventory = development
timeout = 30
roles_path = library:roles

[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s

26 changes: 26 additions & 0 deletions ansible/development
@@ -0,0 +1,26 @@
manager0 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/Users/pierrecaserta/.vagrant.d/insecure_private_key'
manager1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/Users/pierrecaserta/.vagrant.d/insecure_private_key'
manager2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/Users/pierrecaserta/.vagrant.d/insecure_private_key'
worker0 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2202 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/Users/pierrecaserta/.vagrant.d/insecure_private_key'

[managers]
manager[0:2]

[workers]
worker[0:0]

[logging]
manager[1:1]

[monitoring]
manager[1:1]

[gluster]
manager[0:2]
worker[0:0]

[all_groups:children]
managers
workers
logging
monitoring