Chapter 13 The Database Development Process
Adrienne Watt
A core aspect of software engineering is the subdivision of the development process into a series of phases, or steps, each of which focuses on one aspect of the development. The collection of these steps is sometimes referred to as the software development life cycle (SDLC). The software product moves through this life cycle (sometimes repeatedly as it is refined or redeveloped) until it is finally retired from use. Ideally, each phase in the life cycle can be checked for correctness before moving on to the next phase.
Software Development Life Cycle – Waterfall
Let us start with an overview of the waterfall model such as you will find in most software engineering textbooks. This waterfall figure, seen in Figure 13.1, illustrates a general waterfall model that could apply to any computer system development. It shows the process as a strict sequence of steps where the output of one step is the input to the next, and all of one step has to be completed before moving onto the next.
We can use the waterfall process as a means of identifying the tasks that are required, together with the input and output for each activity. What is important is the scope of the activities, which can be summarized as follows:
- Establishing requirements involves consultation with, and agreement among, stakeholders about what they want from a system, expressed as a statement of requirements.
- Analysis starts by considering the statement of requirements and finishes by producing a system specification. The specification is a formal representation of what a system should do, expressed in terms that are independent of how it may be realized.
- Design begins with a system specification, produces design documents and provides a detailed description of how a system should be constructed.
- Implementation is the construction of a computer system according to a given design document, taking into account the environment in which the system will be operating (e.g., specific hardware or software available for the development). Implementation may be staged, usually with an initial system that can be validated and tested before a final system is released for use.
- Testing compares the implemented system against the design documents and requirements specification and produces an acceptance report or, more usually, a list of errors and bugs that require a review of the analysis, design and implementation processes to correct (testing is usually the task that leads to the waterfall model iterating through the life cycle).
- Maintenance involves dealing with changes in the requirements or the implementation environment, bug fixing or porting of the system to new environments (e.g., migrating a system from a standalone PC to a UNIX workstation or a networked environment). Since maintenance involves the analysis of the changes required, design of a solution, implementation and testing of that solution over the lifetime of a maintained software system, the waterfall life cycle will be repeatedly revisited.
Database Life Cycle
We can use the waterfall cycle as the basis for a model of database development that incorporates three assumptions:
- We can separate the development of a database – that is, specification and creation of a schema to define data in a database – from the user processes that make use of the database.
- We can use the three-schema architecture as a basis for distinguishing the activities associated with a schema.
- We can represent the constraints to enforce the semantics of the data once, within a database, rather than within every user process that uses the data.
Using these assumptions and Figure 13.2, we can see that this diagram represents a model of the activities and their outputs for database development. It is applicable to any class of DBMS, not just a relational approach.
Database application development is the process of obtaining real-world requirements, analyzing requirements, designing the data and functions of the system, and then implementing the operations in the system.
Requirements Gathering
The first step is requirements gathering. During this step, the database designers have to interview the customers (database users) to understand the proposed system and obtain and document the data and functional requirements. The result of this step is a document that includes the detailed requirements provided by the users.
Establishing requirements involves consultation with, and agreement among, all the users as to what persistent data they want to store, along with an agreement as to the meaning and interpretation of the data elements. The data administrator plays a key role in this process as they oversee the business, legal and ethical issues within the organization that impact on the data requirements.
The data requirements document is used to confirm the understanding of requirements with users. To make sure that it is easily understood, it should not be overly formal or highly encoded. The document should give a concise summary of all users' requirements – not just a collection of individuals' requirements – as the intention is to develop a single shared database.
The requirements should not describe how the data is to be processed, but rather what the data items are, what attributes they have, what constraints apply and the relationships that hold between the data items.
Analysis
Data analysis begins with the statement of data requirements and then produces a conceptual data model. The aim of analysis is to obtain a detailed description of the data that will suit user requirements so that both high and low level properties of data and their use are dealt with. These include properties such as the possible range of values that can be permitted for attributes (e.g., in the school database example, the student course code, course title and credit points).
The conceptual data model provides a shared, formal representation of what is being communicated between clients and developers during database development – it is focused on the data in a database, irrespective of the eventual use of that data in user processes or implementation of the data in specific computer environments. Therefore, a conceptual data model is concerned with the meaning and structure of data, but not with the details affecting how they are implemented.
The conceptual data model, then, is a formal representation of what data a database should contain and the constraints the data must satisfy. This should be expressed in terms that are independent of how the model may be implemented. As a result, analysis focuses on the question "What is required?" not "How is it achieved?"
Logical Design
Database design starts with a conceptual data model and produces a specification of a logical schema; this will determine the specific type of database system (network, relational, object-oriented) that is required. The relational representation is still independent of any specific DBMS; it is another conceptual data model.
We can use a relational representation of the conceptual data model as input to the logical design process. The output of this stage is a detailed relational specification, the logical schema, of all the tables and constraints needed to satisfy the description of the data in the conceptual data model. It is during this design activity that choices are made as to which tables are most appropriate for representing the data in a database. These choices must take into account various design criteria including, for example, flexibility for change, control of duplication and how best to represent the constraints. It is the tables defined by the logical schema that determine what data are stored and how they may be manipulated in the database.
Database designers familiar with relational databases and SQL might be tempted to go directly to implementation after they have produced a conceptual data model. However, such a direct transformation of the relational representation to SQL tables does not necessarily result in a database that has all the desirable properties: completeness, integrity, flexibility, efficiency and usability. A good conceptual data model is an essential first step towards a database with these properties, but that does not mean that the direct transformation to SQL tables automatically produces a good database. This first step will accurately represent the tables and constraints needed to satisfy the conceptual data model description, and so will satisfy the completeness and integrity requirements, but it may be inflexible or offer poor usability. The first design is then flexed to improve the quality of the database design. Flexing is a term that is intended to capture the simultaneous ideas of bending something for a different purpose and weakening aspects of it as it is bent.
Figure 13.3 summarizes the iterative (repeated) steps involved in database design, based on the overview given. Its main purpose is to distinguish the general issue of what tables should be used from the detailed definition of the constituent parts of each table – these tables are considered one at a time, although they are not independent of each other. Each iteration that involves a revision of the tables would lead to a new design; collectively they are usually referred to as second-cut designs, even if the process iterates for more than a single loop.
First, for a given conceptual data model, it is not necessary that all the user requirements it represents be satisfied by a single database. There can be various reasons for the development of more than one database, such as the need for independent operation in different locations or departmental control over "their" data. However, if the collection of databases contains duplicated data and users need to access data in more than one database, then there are possible reasons that one database can satisfy multiple requirements, or issues related to data replication and distribution need to be examined.
Second, one of the assumptions about database development is that we can separate the development of a database from the development of user processes that make use of it. This is based on the expectation that, once a database has been implemented, all data required by currently identified user processes have been defined and can be accessed; but we also require flexibility to allow us to meet future requirements changes. In developing a database for some applications, it may be possible to predict the common requests that will be presented to the database, and so we can optimize our design for the most common requests.
Third, at a detailed level, many aspects of database design and implementation depend on the particular DBMS being used. If the choice of DBMS is fixed or made prior to the design task, that choice can be used to determine design criteria rather than waiting until implementation. That is, it is possible to incorporate design decisions for a specific DBMS rather than produce a generic design and then tailor it to the DBMS during implementation.
It is not uncommon to find that a single design cannot simultaneously satisfy all the properties of a good database. So it is important that the designer has prioritized these properties (usually using information from the requirements specification); for example, to decide if integrity is more important than efficiency and whether usability is more important than flexibility in a given development.
At the end of our design stage, the logical schema will be specified by SQL data definition language (DDL) statements, which describe the database that needs to be implemented to meet the user requirements.
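As a sketch of what such DDL might look like, the following uses Python's built-in sqlite3 module as a stand-in DBMS. The table and column names are illustrative inventions (loosely echoing the school database example), not the chapter's own schema:

```python
import sqlite3

# An in-memory database standing in for the target DBMS.
conn = sqlite3.connect(":memory:")

# The logical schema expressed as SQL DDL: tables plus the constraints
# (primary keys, NOT NULL, CHECK, foreign key) chosen during logical design.
conn.executescript("""
CREATE TABLE course (
    course_code   TEXT PRIMARY KEY,
    course_title  TEXT NOT NULL,
    credit_points INTEGER NOT NULL CHECK (credit_points > 0)
);
CREATE TABLE enrolment (
    student_id  INTEGER NOT NULL,
    course_code TEXT NOT NULL REFERENCES course(course_code),
    grade       TEXT CHECK (grade IN ('A', 'B', 'C', 'D', 'F')),
    PRIMARY KEY (student_id, course_code)
);
""")

# The catalog now records exactly the tables the DDL defined.
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
```

Note that the constraints live in the schema itself, matching the earlier assumption that semantics are enforced once in the database rather than in every user process.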
Implementation
Implementation involves the construction of a database according to the specification of a logical schema. This will include the specification of an appropriate storage schema, security enforcement, external schema and so on. Implementation is heavily influenced by the choice of available DBMSs, database tools and operating environment. There are additional tasks beyond simply creating a database schema and implementing the constraints – data must be entered into the tables, issues relating to the users and user processes need to be addressed, and the management activities associated with wider aspects of corporate data management need to be supported. In keeping with the DBMS approach, we want as many of these concerns as possible to be addressed within the DBMS. We look at some of these concerns briefly now.
In practice, implementation of the logical schema in a given DBMS requires a very detailed knowledge of the specific features and facilities that the DBMS has to offer. In an ideal world, and in keeping with good software engineering practice, the first stage of implementation would involve matching the design requirements with the best available implementing tools and then using those tools for the implementation. In database terms, this might involve choosing vendor products with DBMS and SQL variants most suited to the database we need to implement. However, we don't live in an ideal world, and more often than not, hardware choices and decisions regarding the DBMS will have been made well in advance of consideration of the database design. Consequently, implementation can involve additional flexing of the design to overcome any software or hardware limitations.
Realizing the Design
After the logical design has been created, we need our database to be created according to the definitions we have produced. For an implementation with a relational DBMS, this will probably involve the use of SQL to create tables and constraints that satisfy the logical schema description and the choice of an appropriate storage schema (if the DBMS permits that level of control).
One way to achieve this is to write the appropriate SQL DDL statements into a file that can be executed by a DBMS, so that there is an independent record, a text file, of the SQL statements defining the database. Another method is to work interactively using a database tool like SQL Server Management Studio or Microsoft Access. Whatever mechanism is used to implement the logical schema, the result is that a database, with tables and constraints, is defined but will contain no data for the user processes.
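A minimal sketch of the file-based approach, again using SQLite; the `staff` table and the temporary file handling are invented for illustration:

```python
import os
import sqlite3
import tempfile

# Keep the DDL in a plain text file so there is an independent record of the
# schema, separate from any one database instance.
ddl = "CREATE TABLE staff (staff_id INTEGER PRIMARY KEY, name TEXT NOT NULL);"
with tempfile.NamedTemporaryFile("w", suffix=".sql", delete=False) as f:
    f.write(ddl)
    path = f.name

# Replay the recorded DDL against a fresh database.
conn = sqlite3.connect(":memory:")
with open(path) as f:
    conn.executescript(f.read())
os.unlink(path)

# The table is defined but, as the chapter notes, holds no data yet.
row_count = conn.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
```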
Populating the Database
After a database has been created, there are two ways of populating the tables – either from existing data or through the use of the user applications developed for the database.
For some tables, there may be existing data from another database or data files. For example, in establishing a database for a hospital, you would expect that there are already some records of all the staff that have to be included in the database. Data might also be brought in from an outside agency (address lists are often brought in from external companies) or produced during a large data entry task (converting hard-copy manual records into computer files can be done by a data entry agency). In such situations, the simplest approach to populate the database is to use the import and export facilities found in the DBMS.
Facilities to import and export data in various standard formats are usually available (these functions are also known in some systems as loading and unloading data). Importing enables a file of data to be copied directly into a table. When data are held in a file format that is not appropriate for using the import function, then it is necessary to prepare an application program that reads in the old data, transforms them as necessary and then inserts them into the database using SQL code specifically produced for that purpose. The transfer of large quantities of existing data into a database is referred to as a bulk load. Bulk loading may involve very large quantities of data being loaded, one table at a time, so you may find that there are DBMS facilities to postpone constraint checking until the end of the bulk loading.
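The application-program route can be sketched as follows, assuming the legacy data arrives as CSV (the data and the `staff` table are invented for illustration; in SQLite, postponing foreign-key checks would be done with `PRAGMA defer_foreign_keys`, not shown here):

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (staff_id INTEGER PRIMARY KEY, name TEXT)")

# Existing data exported from the old system as CSV (contents invented).
legacy = io.StringIO("staff_id,name\n1,Ann\n2,Ben\n3,Cy\n")
rows = list(csv.DictReader(legacy))

# Bulk load one table at a time, inside a single transaction so a failure
# part-way through leaves the table unchanged.
with conn:
    conn.executemany("INSERT INTO staff VALUES (:staff_id, :name)", rows)

loaded = conn.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
```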
Guidelines for Developing an ER Diagram
Note: These are general guidelines that will help in developing a strong basis for the actual database design (the logical model).
- Document all entities discovered during the information-gathering stage.
- Document all attributes that belong to each entity. Select candidate and primary keys. Ensure that all non-key attributes for each entity are fully functionally dependent on the primary key.
- Develop an initial ER diagram and review it with appropriate personnel. (Remember that this is an iterative process.)
- Create new entities (tables) for multivalued attributes and repeating groups. Incorporate these new entities (tables) in the ER diagram. Review with appropriate personnel.
- Verify ER modeling by normalizing tables.
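As an illustration of the guideline on multivalued attributes, the sketch below moves a student's phone numbers (an invented example) out of the student entity and into a table of their own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Rather than storing several phone numbers in one student column
-- (a multivalued attribute), create a new table for them.
CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE student_phone (
    student_id INTEGER NOT NULL REFERENCES student(student_id),
    phone      TEXT NOT NULL,
    PRIMARY KEY (student_id, phone)
);
""")

# One student can now have any number of phone rows.
conn.execute("INSERT INTO student VALUES (1, 'Ann')")
conn.executemany("INSERT INTO student_phone VALUES (1, ?)",
                 [("555-0100",), ("555-0199",)])
phones = conn.execute(
    "SELECT COUNT(*) FROM student_phone WHERE student_id = 1").fetchone()[0]
```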
Key Terms

analysis: starts by considering the statement of requirements and finishes by producing a system specification
bulk load: the transfer of large quantities of existing data into a database
data requirements document: used to confirm the understanding of requirements with the user
design: begins with a system specification, produces design documents and provides a detailed description of how a system should be constructed
establishing requirements: involves consultation with, and agreement among, stakeholders as to what they want from a system; expressed as a statement of requirements
flexing: a term that captures the simultaneous ideas of bending something for a different purpose and weakening aspects of it as it is bent
implementation: the construction of a computer system according to a given design document
maintenance: involves dealing with changes in the requirements or the implementation environment, bug fixing or porting of the system to new environments
requirements gathering: a process during which the database designer interviews the database user to understand the proposed system and obtain and document the data and functional requirements
second-cut designs: the collection of iterations, each involving a revision of the tables, that lead to a new design
software development life cycle (SDLC): the series of steps involved in the database development process
testing: compares the implemented system against the design documents and requirements specification and produces an acceptance report
waterfall model: shows the database development process as a strict sequence of steps where the output of one step is the input to the next
waterfall procedure: a means of identifying the tasks required for database development, together with the input and output for each activity (see waterfall model)
Exercises

- Describe the waterfall model. List the steps.
- What does the acronym SDLC mean, and what does an SDLC portray?
- What needs to be modified in the waterfall model to accommodate database design?
- Provide the iterative steps involved in database design.
Attribution
This chapter of Database Design (including all images, except as otherwise noted) is a derivative copy of The Database Development Life Cycle by the Open University, licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
The following material was written by Adrienne Watt:
- Key Terms
- Exercises
Source: https://opentextbc.ca/dbdesign01/chapter/chapter-13-database-development-process/