Project Report for a .NET Project


A PROJECT REPORT ON XXXXXXXXXX

Submitted to Osmania University in partial fulfillment of the requirement for the award of the degree of XXXXXXXXXXXXXXXXXXXXXXXX

Done by Mr. XXXXXX
XXXXX Institute of Management & Computer Sciences, Hyderabad

CERTIFICATE

This is to certify that Mr. XXXX, bearing Roll No. XXXXXXXXXXX, has developed the software project titled XXXXXXXX for XXXXX SOFTWARE SOLUTIONS in partial fulfillment of the requirement for the award of the degree of XXXXXXX.

HEAD OF DEPARTMENT          PRINCIPAL          EXTERNAL
XXXXX Institute of Management & Computer Sciences

ACKNOWLEDGEMENT

My heartfelt gratitude and thanks to Almighty God, my parents and other family members and friends, without whose sustained support I could not have made this career in XXXX. I wish to place on record my deep sense of gratitude to my project guide, Mr. XXXXX, Xxxxx Software Solutions, Hyderabad, for his constant motivation and valuable help throughout the project work. I express my gratitude to Mr. XXXX, Director of XXXXX Institute of Management & Computer Sciences, for his valuable suggestions and advice throughout the XXX course. I also extend my thanks to the other faculty members for their cooperation during my course. Finally, I would like to thank my friends for their cooperation in completing this project.

XXXXXXX

CONTENTS
1. Introduction
   1.1 Introduction to GSS
   1.2 Introduction to Project
   1.3 Introduction to Module
2. Analysis
   2.1 Identification of Need
   2.2 Investigation
   2.3 Feasibility Study
   2.4 Problem Specification
   2.5 Requirement Specification
       2.5.1 Client requirements
       2.5.2 Hardware requirements
       2.5.3 Software requirements
3. Selected Software
4. Design
   4.1 Software Paradigm
   4.2 Normalization
   4.3 Data Dictionary
   4.4 Relationship Diagram
   4.5 E-R Diagrams
   4.6 Data Flow Diagrams
5. Output Screens (Forms)
6. Reports
7. System Testing and Implementation
   7.1 Test Data
   7.2 Validations
8. Conclusion
9. Scope for Expansion
10. Bibliography

XXXXX SOFTWARE SOLUTIONS

Xxxxx Software Solutions is an IT solution provider for a dynamic environment where business and technology strategies converge. Their approach focuses on new ways of doing business, combining IT innovation and adoption while also leveraging an organization's current IT assets. They work with large global corporations to develop new products or services and to implement prudent business and technology strategies in today's environment.

Xxxxx's range of expertise includes:
• Software Development Services
• Engineering Services
• Systems Integration
• Customer Relationship Management
• Product Development
• Electronic Commerce
• Consulting
• IT Outsourcing

We apply technology with innovation and responsibility to achieve two broad objectives:
• Effectively address the business issues our customers face today.
• Generate new opportunities that will help them stay ahead in the future.

This approach rests on:
• A strategy where we architect, integrate and manage technology services and solutions - we call it AIM for success.
• A robust offshore development methodology and reduced demand on customer resources.
• A focus on the use of reusable frameworks to provide cost and time benefits.

They combine the best people, processes and technology to achieve excellent results, consistently. We offer customers the advantages of:

Speed: They understand the importance of timing, of getting there before the competition. A rich portfolio of reusable, modular frameworks helps jump-start projects. A tried and tested methodology ensures that we follow a predictable, low-risk path to achieve results.
Our track record is testimony to complex projects delivered within and evens before schedule. Expertise: Our teams combine cutting edge technology skills with rich domain expertise. What’s equally important - they share a strong customer orientation that means they actually start by listening to the customer. They’re focused on coming up with solutions that serve customer requirements today and anticipate future needs. A full service portfolio: They offer customers the advantage of being able to Architect, integrate and manage technology services. This means that they can rely on one, fully accountable source instead of trying to integrate disparate multi vendor solutions. Services: GSS is providing its services to Sain Medicaments Pvt. Ltd., Grace Drugs and Pharmaceuticals Pvt. Ltd., Alka Drugs and Pharmaceuticals Pvt. Ltd., Hitech Steels, Real Foods, Ravi Foods ,to name a few. With their rich expertise and experience in information technology they are in the best position to provide software solutions to distinct business requirements. INTRODUCTION TO THE PROJECT: Career Mart is a web application that provides a platform for candidates seeking job and the employers to share their needs. The candidates seeking job (referred as job seekers now onwards) can perform following operations: • Register with the web site. • Post their resume. • Modify their resume. • Search for job postings. • Browse searched job postings. • Add job posting to their favorites list. • Add frequently used searches to their favorites list. The employers can perform following operations: • Register with the web site.  • Enter profile of their company.  • Post one or more job postings.  • Modify the job postings.  • Search the resume database.  • Browse searched resumes.  • Add resumes to their favorites list. System: Modules: • Job Seeker • Employer • Admin Job Seeker Options Post Resumes This option allows job seekers to post their resumes. One candidate can store only one resume in the database.  Search Jobs Using Search Jobs option job seekers can search for available job opportunities from the database. My Favorites When you search for jobs the results can be stored as your favorites jobs in the database. In addition you can also save the search criteria as your favorite search criteria. Employer Options Post Jobs Using this option employers can post job opportunities in the database. These opportunities can be searched by the job seekers. Search Resumes Employers can search available resume database through this option. Company Profile Employers can also specify the profile on the company. My Favorites When you search for resumes the search results can be saved as your favorites in the database Administrative Options Education Levels Using this option administrator can add/edit/delete education levels. Experience Levels Using this option administrator can add/edit/delete experience levels. SYSTEM ANALYSIS Definition and reason for Condition Analysis System analysis will be performed to determine if it is feasible to design information based on policies and plans of the organization and on user requirements and to eliminate the weaknesses of the present system. General requirements are: - 1. The new system should be cost effective. 2. To augment management, improve productivity and services. 3. To enhance User/System interface. 4. To improve information qualify and usability. 5. To upgrade system’s reliability, availability, flexibility and growth potential. 
IDENTIFICATION OF NEED

Career Mart maintains information about the different job providers as well as the job seekers. It notifies every job seeker about the availability of jobs in the category under which the job seeker has registered a resume. The system also notifies the job provider about the persons registered under the category the job provider requires. It also maintains a specialized search engine which provides instant availability of jobs matching the user's category. The system maintains information about the users who have registered with the site, and every user can post multiple resumes in every category. The system helps the user in formulating the resume in a proper manner. After searching for the required job on the site, the seekers can directly forward their resume to the corresponding email address listed in the search results. The same kind of functionality is provided to the job provider, who can instantly mail the candidates who fall under their category.

Drawbacks of the Existing System: The following are the drawbacks of the existing manual system.

Time Delay: In the existing system, information related to all transactions is stored in different registers. Since all the transactions are stored in different registers, it takes a lot of time to prepare different reports.

Redundancy: As the information passes through different registers, each register is consolidated and sent to the next register. The same information is therefore tabulated at each register, which involves a lot of complication and duplication of work and thus causes redundancy.

Accuracy: Since the same data is compiled at different sections, the possibility of tabulating data wrongly increases. Also, if the data volume is large, validation becomes difficult. This may result in loss of accuracy of data.

Information Retrieval: As the information is stored in a particular format, it can only be retrieved in the same format; retrieving it in a different format is not possible.

Storage Media: In the existing system, transaction data is stored in long registers, so it is very difficult to refer back to it after some time.

Reports: The various reports are tabulated manually. They are not very attractive and require more time to prepare. They do not provide adequate help in maintaining the accounts.

Enquiry: Enquiry for different levels of information is much more difficult. Online enquiry of data is not possible.

FEASIBILITY STUDY

TECHNICAL FEASIBILITY: Evaluating the technical feasibility is the trickiest part of a feasibility study. This is because, at this point in time, not much of the detailed design of the system is available, making it difficult to assess issues like performance and costs (on account of the kind of technology to be deployed). A number of issues have to be considered while doing a technical analysis. i) Understand the different technologies involved in the proposed system: Before commencing the project, we have to be very clear about which technologies are required for the development of the new system. ii) Find out whether the organization currently possesses the required technologies: Is the required technology available within the organization? If so, is the capacity sufficient? For instance: "Will the current printer be able to handle the new reports and forms required for the new system?"

OPERATIONAL FEASIBILITY: Proposed projects are beneficial only if they can be turned into information systems that will meet the organization's operating requirements.
Simply stated, this test of feasibility asks if the system will work when it is developed and installed. Are there major barriers to implementation? Here are questions that will help test the operational feasibility of a project:
• Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that people will not see reasons for change, there may be resistance.
• Are the current business methods acceptable to the users? If they are not, users may welcome a change that will bring about a more operational and useful system.
• Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system in general and increases the likelihood of a successful project.
Since the proposed system was to help reduce the hardships encountered in the existing manual system, the new system was considered operationally feasible.

ECONOMIC FEASIBILITY: Economic feasibility attempts to weigh the costs of developing and implementing a new system against the benefits that would accrue from having the new system in place. This feasibility study gives the top management the economic justification for the new system. A simple economic analysis which gives the actual comparison of costs and benefits is much more meaningful in this case. In addition, it proves to be a useful point of reference for comparing actual costs as the project progresses. There could be various types of intangible benefits on account of automation. These could include increased customer satisfaction, improvement in product quality, better decision making, timeliness of information, expedited activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale.

SOFTWARE REQUIREMENT SPECIFICATION

REQUIREMENT SPECIFICATION: The software, Career Mart, is designed for administering and automating all the major activities that are carried out for job seekers and job providers.

INTRODUCTION

Purpose: The main purpose of preparing this document is to give a general insight into the analysis and requirements of the existing system or situation and to determine the operating characteristics of the proposed system.

Scope: This document plays a vital role in the software development life cycle (SDLC) as it describes the complete requirements of the system. It is meant for use by the developers and will be the baseline during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

Developer's Responsibilities Overview: The developer is responsible for:
1) Developing the system, which meets the SRS, and solving all the requirements of the system.
2) Demonstrating the system and installing the system at the client's location after the acceptance testing is successful.
3) Submitting the required user manual describing the system interfaces and the other documents of the system.
4) Conducting any user training that might be needed for using the system.
5) Maintaining the system for a period of one year after installation.

Functional Requirements:

Inputs: The major inputs for Career Mart can be categorized module-wise. Basically, all the information is managed by the software, and in order to access the information one has to prove one's identity by entering a user id and password (a minimal sketch of such a credential check appears below).
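The following sketch shows one way such a credential check could be written with ADO.NET. It is illustrative only: the Users table, its columns and the connection string are assumptions, since the report does not describe how accounts are stored.

// Minimal sketch of a credential check (illustrative only).
// The Users table, its columns and the connection string are assumptions;
// a production system would also store hashed passwords, not plain text.
using System;
using System.Data.SqlClient;

public static class LoginCheck
{
    public static bool IsValidUser(string userId, string password)
    {
        const string connectionString =
            "Data Source=localhost;Initial Catalog=CareerMart;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE UserName = @UserName AND Password = @Password",
            connection))
        {
            // Parameters avoid SQL injection and quoting problems.
            command.Parameters.AddWithValue("@UserName", userId);
            command.Parameters.AddWithValue("@Password", password);

            connection.Open();
            int matches = (int)command.ExecuteScalar();
            return matches == 1;
        }
    }
}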
Every user has their own domain of access beyond which the access is dynamically refrained rather denied. Output: The major outputs of the system are tables and reports. Tables are created dynamically to meet the requirements on demand. Reports, as it is obvious, carry the gist of the whole information that flows across the institution. This application must be able to produce output at different modules for different inputs. Performance Requirements: Performance is measured in terms of reports generated weekly and monthly. SOFTWARE AND HARDWARE SPECIFICATIONS Hardware: Processor : Intel Pentium or more Ram : 256 MB or more Cache : 512 KB Hard disk : 16 GB hard disk recommended for primary partition. Software: Operating system : Windows 2000 or later Front End Software : ASP.NET (C# .NET) Back End Software : Sqlserver 2005 TOOLS, PLATFORM/LANGUAGES USED SELECTED SOFTWARE Microsoft.NET Framework The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives: • To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely. • To provide a code-execution environment that minimizes software deployment and versioning conflicts. • To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party. • To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments. • To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications. • To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code. The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services. The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts. For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. 
ASP.NET works directly with the runtime to enable Web Forms applications and XML Web services, both of which are discussed later in this topic. Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage. The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture. Features of the Common Language Runtime The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime. With regards to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application. The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich. The runtime also enforces code robustness by implementing a strict type- and code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety. In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references. The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so. Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications. While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs. The runtime is designed to enhance performance. 
Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality-of-reference to further increase performance. Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft® SQL Server™ and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting. .NET Framework Class Library The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework. For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework. As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services: • Console applications. • Scripted or hosted applications. • Windows GUI applications (Windows Forms). • ASP.NET applications. • XML Web services. • Windows services. For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes. Client Application Development Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers. Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements. In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft® Visual Basic®. 
The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications. The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs. For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. In some cases the underlying operating system does not support changing these attributes directly, and in these cases the .NET Framework automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent. Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be safely deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page. Server Application Development Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server. This model provides you with all the features of the common language runtime and class library while gaining the performance and scalability of the host server. The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and SQL Server can perform standard operations while your application logic executes through the managed code. Server-side managed code ASP.NET is the hosting environment that enables developers to use the .NET Framework to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework. XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites. However, unlike Web-based applications, XML Web services components have no UI and are not targeted for browsers such as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable software components designed to be consumed by other applications, such as traditional client applications, Web-based applications, or even other XML Web services. As a result, XML Web services technology is rapidly moving application development and deployment into the highly distributed environment of the Internet. If you have used earlier versions of ASP technology, you will immediately notice the improvements that ASP.NET and Web Forms offers. For example, you can develop Web Forms pages in any language that supports the .NET Framework. 
In addition, your code no longer needs to share the same file with your HTTP text (although it can continue to do so if you prefer). Web Forms pages execute in native machine language because, like any other managed application, they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted and interpreted. ASP.NET pages are faster, more functional, and easier to develop than unmanaged ASP pages because they interact with the runtime like any managed application. The .NET Framework also provides a collection of classes and tools to aid in development and consumption of XML Web services applications. XML Web services are built on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL ( the Web Services Description Language). The .NET Framework is built on these standards to promote interoperability with non-Microsoft solutions. For example, the Web Services Description Language tool included with the .NET Framework SDK can query an XML Web service published on the Web, parse its WSDL description, and produce C# or Visual Basic source code that your application can use to become a client of the XML Web service. The source code can create classes derived from classes in the class library that handle all the underlying communication using SOAP and XML parsing. Although you can use the class library to consume XML Web services directly, the Web Services Description Language tool and the other tools contained in the SDK facilitate your development efforts with the .NET Framework. If you develop and publish your own XML Web service, the .NET Framework provides a set of classes that conform to all the underlying communication standards, such as SOAP, WSDL, and XML. Using those classes enables you to focus on the logic of your service, without concerning yourself with the communications infrastructure required by distributed software development. Finally, like Web Forms pages in the managed environment, your XML Web service will run with the speed of native machine language using the scalable communication of IIS. Active Server Pages.NET ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. ASP.NET offers several important advantages over previous Web development models: • Enhanced Performance. ASP.NET is compiled common language runtime code running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early binding, just-in-time compilation, native optimization, and caching services right out of the box. This amounts to dramatically better performance before you ever write a line of code. • World-Class Tool Support. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides. • Power and Flexibility. Because ASP.NET is based on the common language runtime, the power and flexibility of that entire platform is available to Web application developers. The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly accessible from the Web. ASP.NET is also language-independent, so you can choose the language that best applies to your application or partition your application across many languages. 
Further, common language runtime interoperability guarantees that your existing investment in COM-based development is preserved when migrating to ASP.NET. • Simplicity. ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration. For example, the ASP.NET page framework allows you to build user interfaces that cleanly separate application logic from presentation code and to handle events in a simple, Visual Basic - like forms processing model. Additionally, the common language runtime simplifies development, with managed code services such as automatic reference counting and garbage collection. • Manageability. ASP.NET employs a text-based, hierarchical configuration system, which simplifies applying settings to your server environment and Web applications. Because configuration information is stored as plain text, new settings may be applied without the aid of local administration tools. This "zero local administration" philosophy extends to deploying ASP.NET Framework applications as well. An ASP.NET Framework application is deployed to a server simply by copying the necessary files to the server. No server restart is required, even to deploy or replace running compiled code. • Scalability and Availability. ASP.NET has been designed with scalability in mind, with features specifically tailored to improve performance in clustered and multiprocessor environments. Further, processes are closely monitored and managed by the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its place, which helps keep your application constantly available to handle requests. • Customizability and Extensibility. ASP.NET delivers a well-factored architecture that allows developers to "plug-in" their code at the appropriate level. In fact, it is possible to extend or replace any subcomponent of the ASP.NET runtime with your own custom-written component. Implementing custom authentication or state services has never been easier. • Security. With built in Windows authentication and per-application configuration, you can be assured that your applications are secure. Language Support The Microsoft .NET Platform currently offers built-in support for three languages: C#, Visual Basic, and JScript. What is ASP.NET Web Forms' The ASP.NET Web Forms page framework is a scalable common language runtime programming model that can be used on the server to dynamically generate Web pages. Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with existing pages), the ASP.NET Web Forms framework has been specifically designed to address a number of key deficiencies in the previous model. In particular, it provides: • The ability to create and use reusable UI controls that can encapsulate common functionality and thus reduce the amount of code that a page developer has to write. • The ability for developers to cleanly structure their page logic in an orderly fashion (not "spaghetti code"). • The ability for development tools to provide strong WYSIWYG design support for pages (existing ASP code is opaque to tools). ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework class. This class can then be used to dynamically process incoming requests. 
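To make this concrete, a minimal Web Forms page of the kind described in the next paragraph (one that collects a user's name and a category preference and posts back to itself) might look like the sketch below. It is written as a single .aspx file with an inline C# script block; the control names, categories and markup are illustrative and are not taken from the report.

<%-- Minimal Web Forms sketch: collects a name and a category preference
     and posts back to the same page. Control names are illustrative. --%>
<%@ Page Language="C#" %>
<script runat="server">
    // Runs on the server after the postback triggered by the button click.
    void SubmitButton_Click(object sender, EventArgs e)
    {
        GreetingLabel.Text = "Hi " + NameTextBox.Text +
                             ", you selected: " + CategoryList.SelectedItem.Text;
    }
</script>
<html>
<body>
    <form runat="server">
        Name: <asp:TextBox id="NameTextBox" runat="server" />
        Category:
        <asp:DropDownList id="CategoryList" runat="server">
            <asp:ListItem>IT / Software</asp:ListItem>
            <asp:ListItem>Pharmaceuticals</asp:ListItem>
        </asp:DropDownList>
        <asp:Button id="SubmitButton" Text="Submit"
                    OnClick="SubmitButton_Click" runat="server" />
        <asp:Label id="GreetingLabel" runat="server" />
    </form>
</body>
</html>

Because the TextBox, DropDownList and Button are server controls (runat="server"), their values survive the postback automatically, with no hand-written client script.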
(Note that the .aspx file is compiled only the first time it is accessed; the compiled type instance is then reused across multiple requests). An ASP.NET page can be created simply by taking an existing HTML file and changing its file name extension to .aspx (no modification of code is required). For example, the following sample demonstrates a simple HTML page that collects a user's name and category preference and then performs a form postback to the originating page when a button is clicked: ASP.NET provides syntax compatibility with existing ASP pages. This includes support for code render blocks that can be intermixed with HTML content within an .aspx file. These code blocks execute in a top-down manner at page render time. Code-Behind Web Forms ASP.NET supports two methods of authoring dynamic pages. The first is the method shown in the preceding samples, where the page code is physically declared within the originating .aspx file. An alternative approach--known as the code-behind method--enables the page code to be more cleanly separated from the HTML content into an entirely separate file. Introduction to ASP.NET Server Controls In addition to (or instead of) using code blocks to program dynamic content, ASP.NET page developers can use ASP.NET server controls to program Web pages. Server controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the controls is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl. Server controls automatically maintain any client-entered values between round trips to the server. This control state is not stored on the server (it is instead stored within an form field that is round-tripped between requests). Note also that no client-side script is required. In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages. For example, the following sample demonstrates how the control can be used to dynamically display rotating ads on a page. 1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI. 2. ASP.NET Web Forms pages can target any browser client (there are no script library or cookie requirements). 3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages. 4. ASP.NET server controls provide an easy way to encapsulate common functionality. 5. ASP.NET ships with 45 built-in server controls. Developers can also use controls built by third parties. 6. ASP.NET server controls can automatically project both uplevel and downlevel HTML. 7. ASP.NET templates provide an easy way to customize the look and feel of list server controls. 8. ASP.NET validation controls provide an easy way to do declarative client or server data validation. Crystal Reports  Crystal Reports for Visual Basic .NET is the standard reporting tool for Visual Basic.NET; it brings the ability to create interactive, presentation-quality content — which has been the strength of Crystal Reports for years — to the .NET platform. With Crystal Reports for Visual Basic.NET, you can host reports on Web and Windows platforms and publish Crystal reports as Report Web Services on a Web server. To present data to users, you could write code to loop through recordsets and print them inside your Windows or Web application. 
However, any work beyond basic formatting can be complicated: consolidations, multiple level totals, charting, and conditional formatting are difficult to program. With Crystal Reports for Visual Studio .NET, you can quickly create complex and professional-looking reports. Instead of coding, you use the Crystal Report Designer interface to create and format the report you need. The powerful Report Engine processes the formatting, grouping, and charting criteria you specify. Report Experts Using the Crystal Report Experts, you can quickly create reports based on your development needs: • Choose from report layout options ranging from standard reports to form letters, or build your own report from scratch. • Display charts that users can drill down on to view detailed report data. • Calculate summaries, subtotals, and percentages on grouped data. • Show TopN or BottomN results of data. • Conditionally format text and rotate text objects. ACTIVE X DATA OBJECTS.NET ADO.NET Overview ADO.NET is an evolution of the ADO data access model that directly addresses user requirements for developing scalable applications. It was designed specifically for the web with scalability, statelessness, and XML in mind. ADO.NET uses some ADO objects, such as the Connection and Command objects, and also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter. The important distinction between this evolved stage of ADO.NET and previous data architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data stores. Because of that, the DataSet functions as a standalone entity. You can think of the DataSet as an always disconnected recordset that knows nothing about the source or destination of the data it contains. Inside a DataSet, much like in a database, there are tables, columns, relationships, constraints, views, and so forth. A DataAdapter is the object that connects to the database to fill the DataSet. Then, it connects back to the database to update the data there, based on operations performed while the DataSet held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered apps more efficient, data processing is turning to a message-based approach that revolves around chunks of information. At the center of this approach is the DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its source data store. It accomplishes this by means of requests to the appropriate SQL commands made against the data store. The XML-based DataSet object provides a consistent programming model that works with all models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of the source of its data, and by representing the data that it holds as collections and data types. No matter what the source of the data within the DataSet is, it is manipulated through the same set of standard APIs exposed through the DataSet and its subordinate objects. While the DataSet has no knowledge of the source of its data, the managed provider has detailed and specific information. The role of the managed provider is to connect, fill, and persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net Framework provide four basic objects: the Command, Connection, DataReader and DataAdapter. 
In the remaining sections of this document, we'll walk through each part of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they are, and how to program against them. The following sections will introduce you to some objects that have evolved, and some that are new. These objects are: • Connections. For connection to and managing transactions against a database. • Commands. For issuing SQL commands against a database. • DataReaders. For reading a forward-only stream of data records from a SQL Server data source. • DataSets. For storing, remoting and programming against flat data, XML data and relational data. • DataAdapters. For pushing data into a DataSet, and reconciling data against a database. When dealing with connections to a database, there are two different options: SQL Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider (System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider. These are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath). Connections Connections are used to 'talk to' databases, and are respresented by provider-specific classes such as SQLConnection. Commands travel over connections and resultsets are returned in the form of streams which can be read by a DataReader object, or pushed into a DataSet object. Commands Commands contain the information that is submitted to a database, and are represented by provider-specific classes such as SQLCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that returns results. You can also use input and output parameters, and return values as part of your command syntax. The example below shows how to issue an INSERT statement against the Northwind database. DataReaders The DataReader object is somewhat synonymous with a read-only/forward-only cursor over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned after executing a command against a database. The format of the returned DataReader object is different from a recordset. For example, you might use the DataReader to show the results of a search list in a web page. DataSets and DataAdapters DataSets The DataSet object is similar to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with database-like structures such as tables, columns, relationships, and constraints. However, though a DataSet can and does behave much like a database, it is important to remember that DataSet objects do not interact directly with databases, or other source data. This allows the developer to work with a programming model that is always consistent, regardless of where the source data resides. Data coming from a database, an XML file, from code, or user input can all be placed into DataSet objects. Then, as changes are made to the DataSet they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DatSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other objects) to update the original data source. The DataSet has many XML characteristics, including the ability to produce and consume XML data and XML schemas. XML schemas can be used to describe schemas interchanged via WebServices. 
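As a minimal sketch of the Connection, Command and DataReader objects described above (along the lines of the Northwind INSERT mentioned), the following code issues a parameterized INSERT and then streams the rows back with a forward-only reader. The connection string and the choice of the Shippers table are assumptions.

// Minimal ADO.NET sketch: INSERT against the Northwind sample database,
// then read rows back with a forward-only DataReader.
using System;
using System.Data.SqlClient;

class AdoNetSketch
{
    static void Main()
    {
        string connectionString =
            "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Command: an INSERT statement with parameters travels over the connection.
            SqlCommand insert = new SqlCommand(
                "INSERT INTO Shippers (CompanyName, Phone) VALUES (@Name, @Phone)",
                connection);
            insert.Parameters.AddWithValue("@Name", "Sample Shipper");
            insert.Parameters.AddWithValue("@Phone", "(503) 555-0100");
            insert.ExecuteNonQuery();

            // DataReader: a read-only, forward-only stream over the result set.
            SqlCommand select = new SqlCommand(
                "SELECT ShipperID, CompanyName FROM Shippers", connection);
            using (SqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}",
                        reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}

In a real application the connection string would normally be read from configuration rather than hard-coded.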
In fact, a DataSet with a schema can actually be compiled for type safety and statement completion. DataAdapters (OLEDB/SQL) The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with a Microsoft SQL Server databases. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects. The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon a select statement. However, this run-time generation requires an extra round-trip to the server in order to gather required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at design time will result in better run-time performance. 1. ADO.NET is the next evolution of ADO for the .Net Framework. 2. ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new objects, the DataSet and Data Adapter, are provided for these scenarios. 3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates. 4. There is a lot more information about ADO.NET in the documentation. 5. Remember, you can execute a command directly against the database in order to do inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or delete it. 6. Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships DATABASE A database management, or DBMS, gives the user access to their data and helps them transform the data into information. Such database management systems include dBase, paradox, IMS, and Sqlserver. These systems allow users to create, update and extract information from their database. A database is a structured collection of data. Data refers to the characteristics of people, things and events. Sqlserver stores each data item in its own fields. In Sqlserver, the fields relating to a particular person, thing or event are bundled together to form a single complete unit of data, called a record (it can also be referred to as raw or an occurrence). Each record is made up of a number of fields. No two fields in a record can have the same field name. During an Sqlserver Database design project, the analysis of your business needs identifies all the fields or attributes of interest. If your business needs change over time, you define any additional fields or change the definition of existing fields. Primary Key Every table in Sqlserver has a field or a combination of fields that uniquely identifies each record in the table. The Unique identifier is called the Primary Key, or simply the Key. The primary key provides the means to distinguish one record from all other in a table. It allows the user and the database system to identify, locate and refer to one particular record in the database. Relational Database Sometimes all the information of interest to a business operation can be stored in one table. 
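A minimal sketch of the disconnected Fill/Update cycle described above follows, using the project's EducationLevels table (defined in the data dictionary later in this report). The connection string is an assumption, and a CommandBuilder is used to generate the modification commands at run time.

// Minimal sketch of the DataAdapter/DataSet cycle: Fill pulls rows into an
// in-memory DataSet, changes are made while disconnected, and Update pushes
// them back. The connection string is an assumption.
using System;
using System.Data;
using System.Data.SqlClient;

class DataSetSketch
{
    static void Main()
    {
        string connectionString =
            "Data Source=localhost;Initial Catalog=CareerMart;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT EducationLevelId, EducationLevelName FROM EducationLevels",
                connection);

            // Generates INSERT/UPDATE/DELETE commands at run time from the SELECT
            // above (at the cost of an extra metadata round trip to the server).
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet dataSet = new DataSet();
            adapter.Fill(dataSet, "EducationLevels");   // connect, fill, disconnect

            // Work against the cached rows while disconnected from the database.
            DataTable table = dataSet.Tables["EducationLevels"];
            if (table.Rows.Count > 0)
            {
                table.Rows[0]["EducationLevelName"] = "Graduate";
            }

            // Reconnect and reconcile the tracked changes with the data store.
            adapter.Update(dataSet, "EducationLevels");
            Console.WriteLine("Rows in cache: " + table.Rows.Count);
        }
    }
}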
Sqlserver makes it very easy to link the data in multiple tables. Matching an employee to the department in which they work is one example. This is what makes Sqlserver a relational database management system, or RDBMS. It stores data in two or more tables and enables you to define relationships between the table and enables you to define relationships between the tables. Foreign Key When a field is one table matches the primary key of another field is referred to as a foreign key. A foreign key is a field or a group of fields in one table whose values match those of the primary key of another table. Referential Integrity Not only does Sqlserver allow you to link multiple tables, it also maintains consistency between them. Ensuring that the data among related tables is correctly matched is referred to as maintaining referential integrity. Data Abstraction A major prupose of a database system is to provide users with an abstract view of the data. This system hides certain details of how the data is stored and maintained. Data abstraction is divided into three levels. Physical level: This is the lowest level of abstraction at which one describes how the data are actually stored. Conceptual Level: At this level of database abstraction all the attributed and what data are actually stored is described and entries and relationship among them. View level: This is the highest level of abstraction at which one describes only part of the database. Advantages of RDBMS Redundancy can be avoided Inconsistency can be eliminated Data can be Shared Standards can be enforced Security restrictions ca be applied Integrity can be maintained Conflicting requirements can be balanced Data independence can be achieved. Disadvantages of DBMS A significant disadvantage of the DBMS system is cost. In addition to the cost of purchasing of developing the software, the hardware has to be upgraded to allow for the extensive programs and the workspace required for their execution and storage. While centralization reduces duplication, the lack of duplication requires that the database be adequately backed up so that in case of failure the data can be recovered. FEATURES OF SQLSERVER (RDBMS) SQLSERVER is the leading database management system (DBMS) because it is the only Database that meets the uncompromising requirements of today’s most demanding information systems. From complex decision support systems (DSS) to the most rigorous online transaction processing (OLTP) application, even application that require simultaneous DSS and OLTP access to the same critical data, Sqlserver leads the industry in both performance and capability SQLSERVER is a truly portable, distributed, and open DBMS that delivers unmatched performance, continuous operation and support for every database. SQLSERVER RDBMS is high performance fault tolerant DBMS which is specially designed for online transactions processing and for handling large database application. SQLSERVER with transactions processing option offers two features which contribute to very high level of transaction processing throughput, which are The row level lock manager PL/SQL a procedural language extension to SQL Enterprise wide Data Sharing The unrivaled portability and connectivity of the SQLSERVER DBMS enables all the systems in the organization to be linked into a singular, integrated computing resource. 
Portability SQLSERVER is fully portable to more than 80 distinct hardware and operating systems platforms, including UNIX, MSDOS, OS/2, Macintosh and dozens of proprietary platforms. This portability gives complete freedom to choose the database sever platform that meets the system requirements. Open Systems SQLSERVER offers a leading implementation of industry –standard SQL. Sqlserver’s open architecture integrates SQLSERVER and non –SQLSERVER DBMS with industries most comprehensive collection of tools, application, and third party software products Sqlserver’s Open architecture provides transparent access to data from other relational database and even non-relational database. Distributed Data Sharing Sqlserver’s networking and distributed database capabilities to access data stored on remote server with the same ease as if the information was stored on a single local computer. A single SQL statement can access data at multiple sites. You can store data where system requirements such as performance, security or availability dictate. Unmatched Performance The most advanced architecture in the industry allows the SQLSERVER DBMS to deliver unmatched performance. Sophisticated Concurrency Control Real World applications demand access to critical data. With most database Systems application becomes “contention bound” – which performance is limited not by the CPU power or by disk I/O, but user waiting on one another for data access. Sqlserver employs full, unrestricted row-level locking and contention free queries to minimize and in many cases entirely eliminates contention wait times. No I/O Bottlenecks Sqlserver’s fast commit groups commit and deferred write technologies dramatically reduce disk I/O bottlenecks. While some database write whole data block to disk at commit time, Sqlserver commits transactions with at most sequential log file on disk at commit time, On high throughput systems, one sequential writes typically group commit multiple transactions. Data read by the transaction remains as shared memory so that other transactions may access that data without reading it again from disk. Since fast commits write all data necessary to the recovery to the log file, modified blocks are written back to the database independently of the transaction commit, when written from memory to disk. SQL * NET This is Sqlserver’s networking software, which interfaces between SQLSERVER and the OS networking protocol. SQL * NET enables the integration of diverse, OS, database, communication protocols and application to create a unified computing information resource. Application Development Tools SQL * Plus This is the primary interface to the SQLSERVER RDBMS. It provides a powerful environment for querying, defining and controlling data. Based on a full implementation of ANSI standard SQL, it also provides a rich set of extensions in PL/SQL, another data manipulation language SQL * MENU It is a development tool for creating menu-based applications. It can also tie together Sqlserver and non- – Sqlserver applications into a fully integrated environment. SQL * REPORTWRITER It is an advanced report generation tool, which is a non-procedural application development tool. It’s powerful formatting capabilities and fill-in-the form interface allows the user to develop complex reports without resource to extensive programming SYSTEM DESIGN SOFTWARE ENGINEERING PARADIGM APPLIED- (RAD-MODEL) The two design objectives continuously sought by developers are reliability and maintenance. 
Reliable System: There are two levels of reliability. The first is meeting the right requirements; a careful and thorough systems study is needed to satisfy this aspect of reliability. The second level of system reliability involves the actual working system delivered to the user. At this level, system reliability is interwoven with software engineering and development. There are three approaches to reliability.
1. Error avoidance: Prevents errors from occurring in the software.
2. Error detection and correction: In this approach errors are recognized whenever they are encountered and corrected, so that the system does not fail because of their effect.
3. Error tolerance: In this approach errors are recognized whenever they occur, but the system is enabled to keep running, either through degraded performance or by applying values that instruct the system to continue processing.

Maintenance: The key to reducing the need for maintenance is to do the essential tasks well while the system is being built:
1. More accurately defining user requirements during system development.
2. Assembling better systems documentation.
3. Using more effective methods for designing, processing, logging and communicating information with project team members.
4. Making better use of existing tools and techniques.
5. Managing the system engineering process effectively.

Output Design: One of the most important factors of an information system for the user is the output the system produces. Without quality output, the entire system may appear unnecessary, which will make users avoid it and possibly cause it to fail. Output design should therefore proceed in an organized, well-thought-out manner. The right output must be developed while ensuring that each output element is designed so that people will find the system easy to use effectively. The term output applies to information produced by an information system, whether printed or displayed. While designing the output we should identify the specific output needed to meet the information requirements, select a method to present that information, and create a document, report or other format that contains the information produced by the system.

Types of output: Whether the output is a formatted report or a simple listing of the contents of a file, a computer process will produce the output as:
• A report
• A document
• A message
• Retrieval from a data store
• Transmission from a process or system activity
• Directly from an output source

Layout Design: It is an arrangement of items on the output medium. The layouts are built as a mock-up of the actual report or document, as it will appear after the system is in operation. The output layout has been designed to cover the required information. The outputs are presented in the appendix.

Input design and control: Input specifications describe the manner in which data enter the system for processing. Good input design ensures the reliability of the system and produces results from accurate data; poor input design can result in the production of erroneous information. The input design also determines whether the user can interact efficiently with the system.

Objectives of input design: Input design consists of developing specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing, and data entry, the activity of putting data into the computer for processing.
Objectives of input design

Input design consists of developing specifications and procedures for data preparation, the steps necessary to put transaction data into a usable form for processing, and data entry, the activity of putting data into the computer for processing. The five objectives of input design are:

• Controlling the amount of input
• Avoiding delay
• Avoiding errors in data
• Avoiding extra steps
• Keeping the process simple

Controlling the amount of input: Data preparation and data entry operations depend on people. Because labour costs are high, the cost of preparing and entering data is also high; reducing the input requirements reduces this expense and speeds up the entire process, from data capture through processing to the delivery of results to users.

Avoiding delay: A processing delay resulting from data preparation or data entry operations is called a bottleneck. Avoiding bottlenecks should be one objective of input design.

Avoiding errors: Through input validation we control the errors in the input data.

Avoiding extra steps: The designer should avoid input designs that cause extra steps in processing; saving or adding a single step across a large number of transactions saves, or costs, a great deal of processing time.

Keeping the process simple: If there are too many controls, people may find the system difficult to use. The best-designed system fits the people who use it in a way that is comfortable for them.

NORMALIZATION

Normalization is the process of converting a relation to a standard form. It is used to handle the problems that can arise from data redundancy, i.e. the repetition of data in the database, to maintain data integrity, and to handle the insertion, update and deletion anomalies that redundancy can cause. Decomposition is the process of splitting a relation into multiple relations to eliminate anomalies and maintain data integrity; to do this we use normal forms, i.e. rules for structuring relations.

Insertion anomaly: inability to add data to the database due to the absence of other data.

Deletion anomaly: unintended loss of data due to the deletion of other data.

Update anomaly: data inconsistency resulting from data redundancy and partial updates.

Normal Forms: these are the rules for structuring relations that eliminate anomalies.

First Normal Form: A relation is said to be in first normal form if the values in the relation are atomic for every attribute in the relation. By this we mean simply that no attribute value can be a set of values or, as it is sometimes expressed, a repeating group.

Second Normal Form: A relation is said to be in second normal form if it is in first normal form and it satisfies any one of the following rules: 1) the primary key is not a composite primary key; 2) there are no non-key attributes; 3) every non-key attribute is fully functionally dependent on the full set of the primary key.

Third Normal Form: A relation is said to be in third normal form if there exist no transitive dependencies.

Transitive Dependency: if two non-key attributes depend on each other as well as on the primary key, they are said to be transitively dependent.

The above normalization principles were applied to decompose the data into multiple tables, thereby keeping the data in a consistent state.

Data Dictionary

After carefully understanding the requirements of the client, the entire data storage requirement was divided into tables. The tables below are normalized to avoid anomalies during data entry; a short DDL sketch of the normalized structure is given first, followed by the full data dictionary.
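As a hedged sketch only, the lookup tables and their relationship might be expressed in SQL Server DDL as follows. The column names and types are taken from the data dictionary below; the foreign-key constraint and its name are assumptions added to make the relationship explicit, since the dictionary itself records only the primary keys.

CREATE TABLE Countries (
    CountryID   int           PRIMARY KEY,
    CountryName varchar(255)
);

CREATE TABLE States (
    StateID    int            PRIMARY KEY,
    CountryID  int,
    StateName  varchar(255),
    CONSTRAINT FK_States_Countries                -- assumed constraint name
        FOREIGN KEY (CountryID) REFERENCES Countries (CountryID)
);

CREATE TABLE EducationLevels (
    EducationLevelId   int          PRIMARY KEY,
    EducationLevelName varchar(50)
);

Because names such as StateName and EducationLevelName are held only in these lookup tables, tables such as JobPostings and Resumes store just the corresponding ID values, which is what removes the transitive dependencies discussed under third normal form.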
Table name: Companies

|Colname         |Datatype       |Constraints  |
|CompanyID       |int            |Primary key  |
|Username        |varchar(50)    |             |
|CompanyName     |varchar(255)   |             |
|Address1        |varchar(255)   |             |
|Address2        |varchar(255)   |             |
|City            |varchar(50)    |             |
|StateID         |int            |             |
|CountryID       |int            |             |
|Zip             |varchar(50)    |             |
|Phone           |varchar(50)    |             |
|Fax             |varchar(50)    |             |
|CompanyEmail    |varchar(255)   |             |
|WebSiteUrl      |varchar(255)   |             |
|CompanyProfile  |text           |             |

Table name: Countries

|Colname      |Datatype      |Constraints  |
|CountryID    |int           |Primary key  |
|CountryName  |varchar(255)  |             |

Table name: EducationLevels

|Colname             |Datatype     |Constraints  |
|EducationLevelId    |int          |Primary key  |
|EducationLevelName  |varchar(50)  |             |

Table name: ExperienceLevels

|Colname              |Datatype      |Constraints  |
|ExperienceLevelID    |int           |Primary key  |
|ExperienceLevelname  |varchar(255)  |             |

Table name: JobPostings

|Colname           |Datatype       |Constraints  |
|PostingID         |int            |Primary key  |
|CompanyID         |int            |             |
|ContactPerson     |varchar(255)   |             |
|Title             |varchar(255)   |             |
|Department        |varchar(50)    |             |
|JobCode           |varchar(50)    |             |
|City              |varchar(50)    |             |
|StateID           |int            |             |
|CountryID         |int            |             |
|EducationLevelID  |int            |             |
|JobTypeID         |int            |             |
|MinSalary         |money          |             |
|MaxSalary         |money          |             |
|JobDescription    |text           |             |
|PostingDate       |smalldatetime  |             |
|PostedBy          |varchar(50)    |             |
|CategoryID        |int            |             |

Table name: JobTypes

|Colname      |Datatype     |Constraints  |
|JobTypeID    |int          |Primary key  |
|JobTypeName  |varchar(50)  |             |

Table name: MyJobs

|Colname      |Datatype     |Constraints  |
|MyJobID      |int          |Primary key  |
|PostingID    |varchar(50)  |             |
|UserName     |varchar(50)  |             |
|CreatedDate  |datetime     |             |

Table name: MyResumes

|Colname      |Datatype     |Constraints  |
|MyResumeID   |int          |Primary key  |
|ResumeID     |int          |             |
|UserName     |varchar(50)  |             |
|CreatedDate  |datetime     |             |

Table name: MySearches

|Colname         |Datatype      |Constraints  |
|MySearchID      |int           |Primary key  |
|SearchCriteria  |varchar(255)  |             |
|CountryID       |int           |             |
|StateID         |int           |             |
|City            |varchar(50)   |             |
|UserName        |varchar(50)   |             |
|PostDate        |datetime      |             |

Table name: Resumes

|Colname              |Datatype      |Constraints  |
|ResumeID             |int           |             |
|UserName             |varchar(50)   |             |
|JobTitle             |varchar(255)  |             |
|TargetCity           |varchar(50)   |             |
|TargetStateID        |int           |             |
|TargetCountryID      |int           |             |
|RelocationCountryID  |int           |             |
|TargetJobTypeID      |int           |             |
|EducationLevelID     |int           |             |
|ResumeText           |text          |             |
|CoverLetterText      |text          |             |
|CategoryID           |int           |             |
|SubcategoryID        |int           |             |
|IsSearchable         |char(1)       |             |
|PostDate             |datetime      |             |

Table name: States

|Colname    |Datatype      |Constraints  |
|StateID    |int           |Primary key  |
|CountryID  |int           |             |
|StateName  |varchar(255)  |             |

RELATIONSHIP DIAGRAM

E-R DIAGRAMS

[pic]
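The relationships implied by the ID columns in the tables above can be illustrated with a query such as the following. This is a hedged sketch only: the join conditions are assumptions based on matching column names, since the data dictionary records only primary keys, and the 30-day filter is an invented example.

SELECT  jp.Title,
        c.CompanyName,
        jp.City,
        s.StateName,
        co.CountryName,
        jp.MinSalary,
        jp.MaxSalary
FROM    JobPostings jp
        JOIN Companies c  ON c.CompanyID  = jp.CompanyID
        JOIN States    s  ON s.StateID    = jp.StateID
        JOIN Countries co ON co.CountryID = jp.CountryID
WHERE   jp.PostingDate >= DATEADD(day, -30, GETDATE());   -- e.g. postings from the last 30 days

A query along these lines is also the sort of statement that could sit behind the "Search for Jobs" process described in the data flow diagrams that follow.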
DATA FLOW DIAGRAM

A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool and the basis from which other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; these are known as logical data flow diagrams. Physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon, or Gane and Sarson. Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number used for reference. DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes at the first-level DFD. The idea behind the explosion of a process into more processes is that understanding at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail has been described for the analyst to understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements in graphical form; this led to modular design. A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design, so it is the starting point of design down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.

DFD SYMBOLS

In a DFD there are four symbols:

1. A square defines a source (originator) or destination of system data.
2. An arrow identifies a data flow; it is the pipeline through which information flows.
3. A circle or bubble represents a process that transforms incoming data flows into outgoing data flows.
4. An open rectangle is a data store: data at rest, or a temporary repository of data.

[pic: legend of the four DFD symbols listed above]

CONSTRUCTING A DFD

Several rules of thumb are used in drawing DFDs:

1. Processes should be named and numbered for easy reference. Each name should be representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source; an alternative way is to repeat the source symbol as a destination. Since it is used more than once in the DFD, it is marked with a short diagonal.
3. When a process is exploded into lower-level details, the lower-level processes are numbered.
4. The names of data stores and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of a data store; each data store should contain all the data elements that flow in and out. Questionnaires should cover all the data elements that flow in and out; missing interfaces, redundancies and the like are then accounted for, often through interviews.

SALIENT FEATURES OF DFDs

1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, i.e. whether the data flows take place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS

1. Current Physical
2. Current Logical
3. New Logical
4. New Physical

CURRENT PHYSICAL

In a current physical DFD, process labels include the names of people or their positions, or the names of computer systems that provide some of the overall system processing, together with an identification of the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which the data are stored, such as file folders, computer files, business forms or computer tapes.
CURRENT LOGICAL

The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of their actual physical form.

NEW LOGICAL

This would be exactly like the current logical model if the user were completely happy with the functionality of the current system but had problems with how it was implemented. Typically, though, the new logical model will differ from the current logical model in having additional functions, obsolete functions removed, and inefficient flows recognized.

NEW PHYSICAL

The new physical DFD represents the physical implementation of the new system.

RULES GOVERNING THE DFDs

PROCESS
1) No process can have only outputs.
2) No process can have only inputs; if an object has only inputs then it must be a sink.
3) A process has a verb-phrase label.

DATA STORE
1) Data cannot move directly from one data store to another data store; a process must move the data.
2) Data cannot move directly from an outside source to a data store; a process must receive the data from the source and place it in the data store.
3) A data store has a noun-phrase label.

SOURCE OR SINK
The origin and/or destination of data.
1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun-phrase label.

DATA FLOW
1) A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update; the latter is usually indicated, however, by two separate arrows, since the read and the update happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data to the originating process.
4) A data flow to a data store means update (delete or change).
5) A data flow from a data store means retrieve or use.
6) A data flow has a noun-phrase label; more than one data flow noun phrase can appear on a single arrow as long as all of the flows on the same arrow move together as one package.
Career Mart Data Flow Diagrams

[pic: context diagram in which the Job Seeker and the Employer request services and Career Mart provides them]

1) Job Seeker Registration: registration details are stored to the Job Seekers database.
2) Job Seeker Login: the job seeker's login details are checked against the database; the user is either authenticated or authentication fails.
3) Post / Edit Resumes: resume details are stored to, or edited in, the Resumes database.
4) Search for Jobs: the job specifications entered are used to get matching details from the Jobs database.
5) Add Jobs to Favorites: the required job postings are added to the Favorite Jobs list and stored to the database.
6) Job Provider Registration: job provider details are stored to the Job Providers database.
7) Posting Jobs by Job Provider: the job profile is stored to the Jobs database.
8) Search Resume Databases: the required qualifications are used to search the Resume database; selected resumes can be added to a favorites list.
9) Administrator Login: the admin login details are checked against the database; the user is either authenticated or authentication fails.
10) Edit / Delete Education Levels: education levels are edited in or deleted from the Education Levels database.
11) Edit / Delete Experience Levels: experience levels are edited in or deleted from the Experience Levels database.

OUTPUT SCREENS

[pic: screenshots of the application forms and reports]

TESTING

SYSTEM TESTING

During system testing the system is used experimentally to ensure that the software does not fail, i.e. that it will run according to its specification and in the way users expect. Special test data are input for processing and the results examined; a small sketch of such test data is given below. A limited number of users may be allowed to use the system to see whether they try to use it in unforeseen ways. It is preferable to discover any surprises before the organization implements the system.
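Purely as an illustration of the kind of test data referred to above, the hedged sketch below loads a sample row into three of the data dictionary tables and reads one back for inspection. Every literal value is an assumption invented for the test, and the sketch assumes the ID columns accept explicitly supplied values.

-- Sample lookup rows
INSERT INTO Countries (CountryID, CountryName)
VALUES (1, 'Test Country');

INSERT INTO States (StateID, CountryID, StateName)
VALUES (10, 1, 'Test State');

-- Sample company used by the job-posting test cases
INSERT INTO Companies (CompanyID, Username, CompanyName, City, StateID, CountryID)
VALUES (100, 'testuser', 'Test Company Ltd', 'Test City', 10, 1);

-- Read the data back and compare it with the expected values
SELECT CompanyID, CompanyName, City
FROM   Companies
WHERE  CompanyID = 100;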
Implementation and Evaluation

Implementation is the process of having systems personnel check out and put new equipment into use, train users, install the new application and construct any files of data needed to use it. Evaluation of the system is performed to identify its strengths and weaknesses. The actual evaluation can occur along any of the following dimensions.

Operational Evaluation: assessment of the manner in which the system functions, including ease of use, response time, suitability of information formats, overall reliability and level of utilization.

Organizational Impact: identification and measurement of benefits to the organization in such areas as financial concerns, operational efficiency and competitive impact, including the impact on internal and external information flows.

User Manager Assessment: evaluation of the attitudes of senior and user managers within the organization, as well as of end users.

Development Performance: evaluation of the development process in accordance with such yardsticks as overall development time and effort, conformance to budgets and standards, and other project management criteria.

SYSTEM IMPLEMENTATION

System implementation is used to bring a developed system or subsystem into operational use and turn it over to the user. It involves programmers, users and operational management. System implementation components include:

Personnel Orientation: introduce people to the new system and their relationship to it.

Training: give employees the tools and techniques to operate and use the system.

Hardware Installation: schedule for, prepare for, and then actually install new equipment.

Procedure Writing: develop the procedure manuals to be followed in operating the new system.

Testing: ensure that the computer programs properly process the data.

File Conversion: load the information from the present files onto the new system's files.

Parallel Operation: use the new system at the same time as the old to make sure the results are correct.

CONCLUSION

The "JOB ENGINE" has been successfully completed. The goal of the system has been achieved and the identified problems solved. The package has been developed to be user friendly, with the required help provided at different levels. The project can readily be used in the process of decision making: different types of reports can be generated which help management take correct decisions and reduce time delays, which in turn raises the company's work standards as well as its economic position. The system does not reduce the manpower required; rather, it develops and optimizes the available manpower so that the company's standards and capabilities can be scaled to higher levels.

FUTURE SCOPE OF THE PROJECT

The project has met the standards required to work as a job site. If the business logic remains the same, the project can be ported to any job site with minor changes in its working procedures. The project can also be used as a base for developing a system for a different company with different business logic, wherever the commonalities in certain areas remain the same at the business level; by reusing these common features in future development, both the development time and the cost of development can be reduced considerably. A further enhancement is to migrate the project to .NET 2.0 and extend its functionality to the mobile internet using the mobile ASP.NET platform, by which the software and hardware requirements can be scaled down in a way that is not possible with ASP.NET 1.0.

BIBLIOGRAPHY

The following books were referred to during the analysis and execution phases of the project:

SQLSERVER 8I: The Complete Reference, by Sqlserver Press
Software Engineering, by Roger S. Pressman
Professional ASP.NET, by Wrox
MSDN 2002, by Microsoft