Delete satellite from repo, run 'icinga2 node update-config', satellite reappears

  • Hello,
When I try to delete a satellite from my repository, it reappears after running the command 'icinga2 node update-config'.


    Hosts


    debian-master - master
    debian-satellite - active satellite
    debian-satellite2 - old satellite, switched off


    Delete debian-satellite and debian-satellite2


    Running: icinga2 node update-config
    See the log.txt attachment.


    After running icinga2 node update-config, the two satellites reappear in my repository.

    Files

    • log.txt

      (14.97 kB)


  • I agree, but even if I run the node wizard again, these nodes still appear. I even removed the icinga2 package and reinstalled it; still the same issue.

  • icinga2 repository add/remove is experimental and does not notify icinga2 node update-config. You'll need to blacklist the host to prevent it from being re-generated by node update-config (see the sketch below).


    This is somewhat buggy, and one of the reasons for its deprecation. Migrate to top down soon; the bottom up mode will be REMOVED by the end of 2017 at the latest.
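
    For reference, blacklisting in the bottom up mode went roughly like this (the host name is the one from the original post; double-check the exact options with icinga2 node blacklist --help on your version):

    # blacklist the stale node so node update-config stops re-generating it
    icinga2 node blacklist add --zone "debian-satellite2" --host "debian-satellite2"
    # verify the entry, then re-run the update
    icinga2 node blacklist list
    icinga2 node update-config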

  • I have tried the top down approach, but I cannot get the communication working. If I add a host in the zones.d dir, Icinga gives an error stating "client-endpoint not defined". I cannot add an entry manually for each server that will be added in the future; this should be done through the UI. Any suggestions?



    client zones.conf

    object Endpoint "master" {
      host = "113.128.161.116"
      port = "5665"
    }

    object Zone "master" {
      endpoints = [ "master" ]
    }

    object Endpoint "client" {
      host = "113.128.161.118"
    }

    object Zone "client" {
      endpoints = [ "client" ]
      parent = "master"
    }

    object Zone "global-templates" {
      global = true
    }



    in api.conf

    accept_config and accept_commands are true
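
    (For clarity, that corresponds to an ApiListener block roughly like this; the certificate paths the wizard generates are omitted here:)

    object ApiListener "api" {
      accept_config = true
      accept_commands = true
    }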



    master zones.conf

    object Endpoint "master" {
    }

    object Zone "master" {
      endpoints = [ "master" ]
    }




    The above config was generated through the node wizard; I followed the steps given in the Icinga docs.



    Thanks,
    Mahesh

  • Since you've manually defined the configuration inside zones.d/, it wouldn't be much of a hassle to add the endpoint/zone to zones.conf, or to any other local inclusion directory, i.e. include_recursive "my-clients-zones" with generated files in there (see the sketch below). The Puppet and Ansible modules do the very same. Others might suggest looking into the Director too. There are many ways to achieve automation these days, without manual configuration file edits.
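
    A minimal sketch of that approach (the directory name "my-clients-zones" is made up; the client endpoint is the one from your post):

    /* on the master, e.g. at the bottom of /etc/icinga2/icinga2.conf */
    include_recursive "my-clients-zones"

    /* generated file: /etc/icinga2/my-clients-zones/client.conf */
    object Endpoint "client" {
      host = "113.128.161.118"
    }

    object Zone "client" {
      endpoints = [ "client" ]
      parent = "master"
    }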

  • Thanks,

    I am trying to set up one master and one client using the top down approach. I have run the node wizard on both the master and the client, and I have added the global zone config on both:

    object Zone "global-templates" {
      global = true
    }

    Up to this point I am clear.


    Now I want a service/command to run on the client endpoint from Icinga Director. If I don't put the client zone and endpoint manually in the master's zones.conf file, how does the master know where the client is located? Even in the UI, in the "run on agent" section, the drop down box doesn't show the client zone and endpoint. How can I do this in an automated way? (See the command_endpoint sketch below.)


    If this is the case, whenever a new node is added we need one entry for it in the master's zones.conf file, right?

    Am I missing anything?
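
    (To illustrate the command_endpoint part: once the client endpoint/zone are known to the master, a check can be pinned to the agent roughly like this; the host and service names are only examples:)

    /* on the master: execute the check on the client agent */
    object Host "client" {
      address = "113.128.161.118"
      check_command = "hostalive"
    }

    object Service "disk" {
      host_name = "client"
      check_command = "disk"
      command_endpoint = "client"   // run on the agent, results come back to the master
    }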

  • Even in the Icinga docs, everything is done manually? (I agree that installation and setup have to be done manually, but configuring hosts and services should be done through the Director.) Where can I find the automated way of configuring hosts?

    I have added hosts in the master zone, but I want to execute commands on the client.


    I want to add a host template with "run in icinga agent" set to yes, and have the client zones populated in the drop down box in Icinga Director. Is that possible?