Testing Centric Life Cycle

Could there be such a thing as a Testing Centric Life Cycle?

What would it have to look like to be Testing Centric?  Wouldn’t we have to ask questions of each Deliverable to prove that it supports what we believe our end goals to be, even though we cannot know all of those goals in the earlier stages?  How would we construct these questions to ask of our Life Cycle Deliverables so that the answers would give us good indications of whether to proceed?

Imagine if we were to create a Statement of Work for a project intended to build a new business application for an organization.  If we wanted to TEST this SOW, what questions would we ask of it that could help us confirm that its content supports our end goals?

It might be possible for us to start asking questions about the Scope described in this Statement of Work:

  1. Does the scope indicate where our input data will be coming from?
  2. Does the scope explain how our product / application will affect the data?
  3. Does it show how the affected data will provide additional value over our inputs?
  4. Can we tell from the scope what external events and/or operators our application will have to respond to?
  5. Can we predict how we will have to maintain the data collected, modified, reported, used to accomplish the intended functions of our product / application?
  6. Is it Orthogonal?  (Not a trick question; read other portions of this blog!)
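
One way to picture this is to treat the scope questions as an explicit checklist that the SOW either passes or fails.  Here is a minimal sketch of that idea in Python; the question list and the pass/fail rule are illustrative assumptions, not a prescribed tool.

```python
# A minimal sketch of "testing" a Statement of Work: each scope question
# becomes a named check, and the SOW only "passes" when every check does.
# The questions and the pass/fail rule are illustrative assumptions.

SCOPE_QUESTIONS = [
    "Does the scope indicate where our input data will be coming from?",
    "Does the scope explain how our product/application will affect the data?",
    "Does it show how the affected data will provide additional value over our inputs?",
    "Can we tell what external events and/or operators our application must respond to?",
    "Can we predict how we will have to maintain the data collected, modified, and reported?",
    "Is it Orthogonal?",
]

def test_statement_of_work(answers):
    """Return the scope questions that the SOW does not yet answer."""
    return [q for q in SCOPE_QUESTIONS if not answers.get(q, False)]

# Usage: mark each question True or False after reviewing the SOW.
answers = {q: True for q in SCOPE_QUESTIONS}
answers["Is it Orthogonal?"] = False
open_questions = test_statement_of_work(answers)
print("SOW passes" if not open_questions else f"Open questions: {len(open_questions)}")
```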

If we accept the notion that each Deliverable in the path of our project plan must answer questions that relate to its completeness and its alignment with previous deliverables, then the Test Centric Life Cycle Model can be strongly directed by the premise behind Orthogonality:

  • All modeling perspectives MUST be reflected in all other modeling perspectives, so that if each component and its related model components were removed, nothing would remain in any of the models
  • Model perspectives should address the following disciplines or viewpoints:
    • Event Diagram
    • Process Diagram
    • Data Model Diagram
    • State Transition Diagram
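
As a rough illustration of that premise, here is a hedged sketch of an orthogonality check: every component named in one perspective should also be accounted for in every other perspective.  Representing each diagram as a simple set of component names is an assumption made only for this example.

```python
# A sketch of the orthogonality premise: every component that appears in one
# modeling perspective should appear in all of the others, so that removing a
# component and its related model elements leaves nothing dangling anywhere.
# Representing each perspective as a set of component names is an assumption.

def orthogonality_gaps(models):
    """For each perspective, report components found elsewhere but missing here."""
    all_components = set().union(*models.values())
    return {name: all_components - found
            for name, found in models.items()
            if all_components - found}

perspectives = {
    "event_diagram":            {"OrderReceived", "PaymentPosted"},
    "process_diagram":          {"OrderReceived", "PaymentPosted"},
    "data_model_diagram":       {"OrderReceived", "PaymentPosted"},
    "state_transition_diagram": {"OrderReceived"},   # PaymentPosted is missing
}

print(orthogonality_gaps(perspectives))
# {'state_transition_diagram': {'PaymentPosted'}}
```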

Do you think, dear reader, that this thread of discussion should continue or be dropped?

Please Respond / Comment on this post OR Reply to me at whendoyoustoplooking@gmail.com

Thanx, bgbg

Orthogonal Dependencies

  • When an Event occurs that your application must respond to
    • a Process is performed
      • Each Process takes the data from an Event and Stores it in a Data Store
        • Where its State is established
  • There are Data States that your application must respond to
    • By Performing a Process
      • Which collects the related Data and makes changes
        • Where the State of the Data records are changed
  • And there are Business Transactions that require your application to
    • Perform a Process
      • That gathers the relevant Data and makes updates to these Data
        • Where the State of the Data Records are changed

Build a set of models that leaves no Event, State, or Transaction without a response, and you have built a Complete Solution.
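
Here is a minimal sketch of that dependency chain in Python: an Event, a Data State, or a Business Transaction each triggers a Process, the Process stores or updates the related Data, and the Data’s State is set as part of that update.  The store layout and the state names are illustrative assumptions only.

```python
# A sketch of the Event / State / Transaction chain: each trigger performs a
# Process, which records or updates Data and establishes the Data's State.
# The dictionary store and the state names are assumptions for illustration.

data_store = {}   # key -> {"data": ..., "state": ...}

def handle_event(key, payload):
    """An Event occurs: a Process stores its Data and establishes its State."""
    data_store[key] = {"data": payload, "state": "RECORDED"}

def handle_data_state(key, required_state, new_state):
    """A Data State requires a response: a Process changes the record's State."""
    row = data_store.get(key)
    if row and row["state"] == required_state:
        row["state"] = new_state

def handle_transaction(key, changes, new_state):
    """A Business Transaction: a Process updates the Data and changes its State."""
    row = data_store[key]
    row["data"].update(changes)
    row["state"] = new_state

handle_event("order-42", {"amount": 100})
handle_data_state("order-42", required_state="RECORDED", new_state="REVIEWED")
handle_transaction("order-42", {"amount": 90}, new_state="ADJUSTED")
print(data_store["order-42"])   # {'data': {'amount': 90}, 'state': 'ADJUSTED'}
```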

Infrastructure Project Management

There could be hundreds or thousands of components to change, migrate, upgrade, or otherwise affect in an Infrastructure Project.

There might be only a short, finite list of changes to apply, and these changes will have to be applied in the same manner to each component of the Infrastructure.

Once you set up a process for applying these changes OR migrating servers, databases, web apps, etc., this process will need to be repeated with Quality, Precision, Dependability and BOREDOM for each of the components that will be affected.

It would be much more important to manage the BOREDOM than the other aspects once your team gets started making these changes.  Of course, in some cases, you COULD acquire propagation software that would execute a script to make your changes over and over again and, therefore, reduce the BOREDOM.
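
As a rough sketch of what that scripted approach could look like, the loop below applies one well-tested change procedure to every component and records the result, so the repetition (and the BOREDOM) lives in the script rather than in a person.  The component names and the apply_change body are placeholders, not a real propagation tool.

```python
# A hedged sketch of scripting the repetitive work: the same change procedure
# is applied to every component, and the outcome of each attempt is logged so
# consistency can be tracked.  Component names and steps are placeholders.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

components = ["app-server-01", "app-server-02", "db-server-01", "web-app-01"]

def apply_change(component):
    """Placeholder for the actual migration/upgrade steps for one component."""
    logging.info("Applying change to %s", component)
    # ... the same scripted steps would run here for every component ...

results = {}
for component in components:
    try:
        apply_change(component)
        results[component] = "succeeded"
    except Exception as exc:            # record the failure and keep going
        results[component] = f"failed: {exc}"
        logging.error("Change failed on %s: %s", component, exc)

logging.info("Summary: %s", results)
```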

But, what else could we as Project Managers do to increase the likelihood of consistent, reliable results?

Did you say: “What’s a WalkThrough?”?

For some, a walk through has to do with looking at a new apartment or house that they are thinking about moving into.  For others, it could be an opportunity to STEP all over somebody else’s work.

For me, many years ago, when Programmers used Coding Pads and had to handle very large decks of cards to get their programs compiled for Desk Checking, a Walk Through was a very important part of preparing to write a program.  It went along with flowcharting, doing record and print layouts, and trying to figure out just exactly what the user might want, considering how little they would have actually said to me about what they needed the program to do for them.

So, a typical Walk Through scenario could be something like this:

  • User wants a new report from an existing file of data
  • User tells programmer’s boss what they need and boss writes down what he/she thinks they asked for
  • Boss calls out to programmer

Watch this space: I’m writing AGAIN!!!

I just want you all to know that I have started writing again. Recently, I have spent a lot of time NOT writing (did have some other things to take care of, and still do), but it appears I have the bug again!

So, I want to let all of you, my readers, know that I am starting again with new materials AND I also want you to know that I have several blogs I am writing on:

Project Management Handbook: http://www.projectmanagementhandbook.wordpress.com

BG Opinions: http://www.opinionsarefree.wordpress.com

Sixty Somethin: http://www.sixtysomethin.wordpress.com

Please check these out and I hope that you continue to enjoy reading, and please send me your comments or suggestions at whendoyoustoplooking@gmail.com or some of my other email addresses for these blogs.

Thanx, bgbg

On Conceptual Modeling: 1984; Brodie, Mylopoulos, Schmidt

From a recent review of this book (see Blog Post Title), I have a question:

Main Question:  Is it more efficient to build the indexes in support of invisible keys or to store this same data in each row of a set of tables?

What is an Invisible Key?

A row of data must have fields included that uniquely identify each individual row.  For example, a row in a “Person” data object will be identified by some combination of fields containing: Name, Tax identifier, Country of Origin, Birth Date, etc.  No one of these data fields would be enough to uniquely identify each “Person” in the object, but a combination will usually work well.

Each of these fields has a specific maximum or absolute size in characters: Birth Date could be represented by the data pattern yyyymmdd, whereas the Tax Identifier could have different sizes depending on the Country of Origin.

The notion of an Invisible Key is that even though these data fields are used to uniquely identify each row in an object, the specific data values for each row (the Birth Date field, for example) do not have to be physically present in the row in order to find that row in a search or inquiry.  As a simple example, let’s use the Birth Date field to describe how this is possible.

The date field, as described above, has 8 characters in it which we should agree can identify each and every Birth Date for the foreseeable past and future.  This statement should be true from January 1, 0000 through and including December 31, 9999.  The dates prior to the first recordable year would need another representation that we will not include in these discussions.  And, the dates beyond December 31, 9999 will need a conversion/expansion similar to the Y2K efforts which some of us may remember.

The allowable values in each of the characters in our Birth Date field are limited to a range of numbers from Zero (0) to Nine (9).  Suppose we build a data base that maintains an individual index for each of these data characters, so that each index lists all data rows that have a Zero in position one, a One in position one, a Two in position one, and so on through Nine in position one, and then repeats these lists for each of the eight positions in the field.  We can then select the lists of indices that we want to use in a search:

  • Year 2010 will use a search for:
    • All records where Birth Date Position One equals “2”
      and Birth Date Position Two equals “0”
      and Birth Date Position Three equals “1”
      and Birth Date Position Four equals “0”
    • The result of this search will be a list of all record locations that contain 2010 in the first four positions of Birth Date!

Using this type of inverted list index, we can extract any Birth Date or range of Birth Dates based on the content and record locations in the Indices rather than requiring that that same content be present in each row of data!
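
Here is a small sketch of that positional “inverted list” index, assuming Birth Date is held as an 8-character yyyymmdd string.  One index is kept per character position, mapping each digit to the set of record locations that hold that digit in that position; the Year 2010 search from the list above then becomes an intersection of four of those lists.

```python
# A sketch of a positional inverted-list index over Birth Date (yyyymmdd).
# indexes[position][digit] holds the record locations with that digit there.
# The sample data and record locations are assumptions for illustration.

from collections import defaultdict

birth_dates = {          # record location -> Birth Date value
    101: "20100214",
    102: "19850701",
    103: "20101130",
}

indexes = [defaultdict(set) for _ in range(8)]
for location, value in birth_dates.items():
    for position, digit in enumerate(value):
        indexes[position][digit].add(location)

def search_prefix(prefix):
    """Return record locations whose Birth Date starts with the given digits."""
    result = None
    for position, digit in enumerate(prefix):
        hits = indexes[position][digit]
        result = hits if result is None else result & hits
    return result or set()

print(search_prefix("2010"))   # {101, 103} -- the Year 2010 search shown above
```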

As far as the original question, I would like to know how much space would be required to maintain the List Indices for all these data positions versus how much space will be required to maintain these same data in both an index structure AND the data rows themselves.

If we take the inventory of fields proposed earlier and guess at their physical sizes then each row of data represents (and could contain) the following number of characters:

  • Last Name            Char (25)
  • Middle Name          Char (15)
  • First Name           Char (15)
  • Country of Origin    Char (25)
  • Tax Identifier       Char (15)
  • Birth Date           Char (8)

For a total of 103 characters per row.

Multiplying this number of positions by the number of “People” in our data collection could be a staggering sum of physical space in a data object.  For example, ten thousand (10,000) people in an “Employee” data base would require over One Million (1,030,000) characters.  That may not be a ‘staggering’ number, but what if we are talking about the “person” data for an Electronic Medical Records data object servicing a National or Global Health Care application?  Three hundred billion people (300,000,000,000) would require almost 31 Terabytes (30,900,000,000,000 characters)!  And, this is just for the data values that might need to be indexed.
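
For what it is worth, the arithmetic above can be checked with a few lines, assuming one byte per character and decimal units:

```python
# A quick check of the space estimate, using the assumed field sizes above
# (103 characters per row) and the row counts used in the text.

row_size = 25 + 15 + 15 + 25 + 15 + 8          # 103 characters per "Person" row

for people in (10_000, 300_000_000_000):
    characters = people * row_size
    print(f"{people:,} rows -> {characters:,} characters (~{characters / 1e12:.2f} TB)")

# 10,000 rows -> 1,030,000 characters (~0.00 TB)
# 300,000,000,000 rows -> 30,900,000,000,000 characters (~30.90 TB)
```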

So what would it take in physical space to represent the indices for these same data fields/values?

The simple truth about THAT is that I don’t know enough about physically building indices to answer THAT question.  Can you?  Please?

The answer to this question, in my opinion, can have a significant effect on the approach and the ability of the technical community to respond to the organizational demands for an ever increasing amount of data for their analytical needs.

“I’m just asking…”

Hello World! Thank You, World!

I’ve been writing random and sometimes organized thoughts in this blog for quite a while now and I just took a closer look at the statistics of people who have read my stuff.

So, I want to Thank every one of you who have read my materials and I hope that you all continue to take benefit from or enjoy these materials.

Here is a graphic of the geographical reach that I have benefited from and I would like each of you to know that I appreciate every minute you take to follow me, read this stuff, and, hopefully comment on and invite others you know to read this same information:

[Image: a map showing the geographical reach of this blog’s readers]

Thank you, bgbg

The Data Manifesto

All Data Are Protected (à la The Matrix)

An Event occurs that is associated with a collection of data attributes that need protection.

Each unique Event has a finite list of Data that are expected to occur when a new instance of an Event happens.  In almost all cases, that list of Data includes, at least, the Date, Time, Location, and Instigator of the Event.  Other Data are also needed depending on the specific Type or Name of each Event.  This list of prescribed Data for a unique Event is called a Row.  When each Row first appears with an Event, it is considered an Unprotected Row.

Event Data must receive protection at three levels before it can become a legitimate Transaction:

  1. Content Protection
  2. Field Relationship Protection
  3. Object Relationship Protection

Once a set of Event Data or each Unprotected Row passes all three of these levels, it can become a legitimate Transaction and take its place in its Legitimate Protected Transaction Row Object to be processed against its Master Data Object(s).

Protection Level One: Content

Each Data has a pre-determined set of Valid Values that are anticipated and allowed in that Data.  If a Data contains a Value that is not in the anticipated Valid Value set, then the Unprotected Row that represents this particular Event is sent to “Corrections” for further processing.  This set of Event Data CANNOT be processed as a Level One Unprotected Row.

Further, each Unprotected Row for a unique Event will have Data that are defined as Primary and/or Partial Key Data which is used to uniquely identify each instance of every Event occurrence.

Valid Unprotected Rows that pass this Level One Protection can proceed to Level Two Protection validation.

Protection Level Two: Field Relationship

Some Data have Values that require or limit the selection of Valid Values for other Data in its Unprotected Row.  If a Data Value prescribes that other Data be limited to a subset of their anticipated Valid Values then those Data must be validated based on that limited subset.

If a related Data does not comply with its limited Valid Value set, then the Unprotected Row that represents this particular Event is sent to “Corrections” for further processing.  This set of Event Data CANNOT be processed as a Level Two Unprotected Row.

Valid Unprotected Rows that pass this Level Two Protection can proceed to Level Three Protection Validation.

Protection Level Three: Object Relationship

Successful Level One and Level Two Unprotected Rows must have Relationships with other Objects in the Model by using the Valid Values that are in their Data that are part of their Primary or Partial Key Data.  The Values in these Primary and/or Partial Key Data have specific Relationship Rules to other named Objects that must be Validated.

Optional Relationship Rules need NOT be Validated.

The Relationships that are marked as Mandatory MUST find Protected Rows in named Objects elsewhere in the Model.  If these Mandatory Relationships are not proven to exist then the Unprotected Row is sent to “Corrections” for further processing.  This set of Event Data CANNOT be processed as a Legitimate Protected Transaction.

Valid Unprotected Rows that pass this Level Three Protection will be inserted in the Legitimate Protected Transaction Row Object that this Type of Event is assigned to for further processing.
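
Putting the three levels together, here is a minimal sketch of the protection pipeline, assuming a Row is a plain dictionary, the Valid Value sets and Field Relationship rules are simple lookup tables, and “Corrections” is just a list.  The specific fields, rules, and objects are illustrative assumptions, not a prescribed design.

```python
# A sketch of the three protection levels.  Rows that fail any level go to
# "Corrections"; rows that pass all three become Legitimate Protected
# Transactions.  All rule tables below are assumptions for illustration.

VALID_VALUES = {"event_type": {"SALE", "REFUND"}, "location": {"NY", "LA"}}

# Level Two: an event_type of REFUND limits location to NY (example rule).
FIELD_RULES = {("event_type", "REFUND"): ("location", {"NY"})}

# Level Three: the instigator key must exist in a named "Person" object.
PERSON_OBJECT = {"P-001", "P-002"}

corrections = []
protected_transactions = []

def protect(row):
    # Level One: Content -- every Data value must be in its Valid Value set.
    for field, valid in VALID_VALUES.items():
        if row.get(field) not in valid:
            corrections.append(row)
            return False
    # Level Two: Field Relationship -- some values limit other fields' values.
    for (field, value), (other, subset) in FIELD_RULES.items():
        if row.get(field) == value and row.get(other) not in subset:
            corrections.append(row)
            return False
    # Level Three: Object Relationship -- mandatory keys must exist elsewhere.
    if row.get("instigator") not in PERSON_OBJECT:
        corrections.append(row)
        return False
    protected_transactions.append(row)
    return True

protect({"event_type": "SALE", "location": "LA", "instigator": "P-001"})    # protected
protect({"event_type": "REFUND", "location": "LA", "instigator": "P-001"})  # to Corrections
```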

All Data Are Protected.

Standards for Cloud Computing

I believe things get “REAL” when there are standards and guidelines for those practitioners who are interested in moving things forward.

I found this article talking about creating standards for Cloud Computing and I agree whole-heartedly!

http://serion.co.nz/blog/hybrid-and-cloud-computing-standards

Thanx for reading and enjoy the article.

If you can, be prepared to join the debate.

bgbg