The ideal approach to building a CMDB is to understand who the target audience is and the types of questions they are going to ask of the CMDB. While storing every bit of information in the CMDB is somewhat doable, it quickly becomes a major headache to maintain the data and the accuracy of the information. If you are unable to keep the data fresh and correct, the value of the CMDB diminishes and it will not be used.
One common use case is Change Management. When a change request is being reviewed, the current configuration needs to be compared against the requested change (e.g., changing an IP address or upgrading a daemon process), along with a review of the potential business, service, or customer impact.
Outage triage is another area: an administrator can look at the current configuration of the service, or at any changes over the last week that may have caused the outage (or at a planned change that might fix it).
Regardless of the usage, understanding the types of questions users are going to ask of the CMDB is very important. You may want to expand the scope slightly to allow other groups, departments, etc. to adopt the CMDB within their processes... but don't go crazy trying to store everything ever known. Remember, you have to be able to maintain the data. It is typically easier to add new CI types, attributes, and relationships to the CMDB later, as new groups of users within the enterprise want to use it for other types of information.
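The idea of starting with a small schema and growing it additively can be sketched as follows. This is a minimal, hypothetical model for illustration only; the class and attribute names are assumptions, not any product's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CIType:
    """A configuration item type: its attributes and allowed relationship types."""
    name: str
    attributes: set = field(default_factory=set)
    relationship_types: set = field(default_factory=set)

class Schema:
    """A CMDB schema that can be extended without disturbing existing users."""
    def __init__(self):
        self.ci_types = {}

    def add_ci_type(self, name, attributes=(), relationship_types=()):
        self.ci_types[name] = CIType(name, set(attributes), set(relationship_types))

    def extend_ci_type(self, name, attributes=(), relationship_types=()):
        # Extensions are additive, so earlier adopters are unaffected.
        ci = self.ci_types[name]
        ci.attributes.update(attributes)
        ci.relationship_types.update(relationship_types)

# Initial scope: just what Change Management needs.
schema = Schema()
schema.add_ci_type("Server", ["hostname", "ip_address"], ["runs"])
schema.add_ci_type("Application", ["version"], ["depends_on"])

# Later, another group adopts the CMDB and needs ownership data.
schema.extend_ci_type("Server", attributes=["business_owner", "contact"])
```

The point of the sketch is the `extend_ci_type` call: scope grows one group at a time, rather than modeling everything up front.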
For population of the CMDB, one common approach is to leverage other systems (Asset Management, Help Desk, network management tools, discovery, etc.) to do the initial population and potentially maintain the data over time. Whether it is a home-grown application or an off-the-shelf product that provides automated discovery information, there are a few options for integrating it into the CMDB. There may be specific attributes and relationships/dependencies that need to be maintained manually, which is fine (e.g., business owner, contact information), but ensure there is a clean process in place to protect data quality.
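One way to keep that process clean is to make the reconciliation step explicit: automated feeds refresh the attributes they own, but never overwrite manually curated ones. The sketch below assumes a simple attribute-dictionary representation of a CI; the attribute names and the manually-maintained set are illustrative assumptions.

```python
# Attributes owned by a manual process (assumed for illustration).
MANUALLY_MAINTAINED = {"business_owner", "contact"}

def reconcile(existing: dict, discovered: dict) -> dict:
    """Merge a discovered record into an existing CI, never overwriting
    attributes owned by a manual process."""
    merged = dict(existing)
    for attr, value in discovered.items():
        if attr in MANUALLY_MAINTAINED:
            continue  # protect manually curated data from the automated feed
        merged[attr] = value
    return merged

ci = {"hostname": "web01", "ip_address": "10.0.0.5", "business_owner": "Finance"}
feed = {"ip_address": "10.0.0.7", "business_owner": ""}  # discovery has no useful owner data
ci = reconcile(ci, feed)
# ip_address is refreshed from the feed; business_owner is left intact
```

A split like this lets discovery run on a routine schedule without eroding the hand-maintained fields that give the CMDB its business context.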
Novell's approach to making the CMDB easy to maintain and use centers on the following high-level features.
• A user interface to administer the types of CIs to store, the attributes per CI type, and the different types of relationships that need to be tracked. With attribute validation rules, relationship rules, and CI flagging capabilities, data quality can be greatly improved.
• An importable CIM metamodel for those who wish to leverage this standard. It is rather large, and we typically recommend using a reduced version of it.
• Several out-of-the-box integration options (adapters, integration modules) intended to populate and maintain the CIs, attributes, and relationships on a routine basis. The ETL approach some vendors use can be tough to set up and may not provide the data in a timely manner.
• A CMDB interface intended for a user-community model, where people with common responsibilities can work together to manage and maintain common CIs (databases, applications, networking, etc.), yet open enough to share the CMDB with the enterprise.
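The attribute validation and CI flagging mentioned in the first feature can be sketched generically. The rules below (a regex for IPv4-shaped addresses, a non-empty hostname) are assumed examples, not the product's actual rule syntax.

```python
import re

# Example validation rules, keyed by attribute name (illustrative only).
VALIDATORS = {
    "ip_address": lambda v: re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", v) is not None,
    "hostname": lambda v: bool(v),
}

def validate_ci(ci: dict) -> list:
    """Return the names of attributes that fail validation, so the CI can be
    flagged for review instead of silently accepted into the CMDB."""
    return [attr for attr, check in VALIDATORS.items()
            if attr in ci and not check(ci[attr])]

flags = validate_ci({"hostname": "web01", "ip_address": "10.0.0.999.x"})
# flags == ["ip_address"]
```

Flagging bad records at the door, rather than discovering them during an outage, is one of the cheapest ways to keep the data trustworthy.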
Have a vision, and understand the end users' use cases. For those with vast amounts of data that can be leveraged, don't get stuck in the weeds architecting a solution that makes the CMDB anything and everything to everyone; have a decent plan. Many times I have seen CMDB projects linger on for well over a year without any clear indication of "done." Pick an area, pick the use cases, determine the integration points that can automate importing and maintaining the data, nail down the required CI types, attributes per CI, and relationship types per CI, and drive forward. Typically, once you have one or two areas going and have validation points for ROI, data quality, and usage, expanding into other areas is easier.