XML Latest Changes Are In

The implications of security payloads and overheads for the performance of optimized XML networks (Choi, Wong, 2009) are inherent in the continual design of XML standards and protocols that attempt to compress these elements and optimize their performance. The integration of security into the eXtensible Business Reporting Language (XBRL) has had a minimal impact on the overall performance of XML networks, as the security features in this standard are compressed (Piechocki, Felden, Graning, Debreceny, 2009). Compression is also used specifically with the XML Key Management Specification to increase performance (Ekelhart, Fenz, Goluch, Steinkellner, Weippl, 2008). Compression algorithms shrink the contents of data containers, packets, and messages so they are more compact, which increases transmission speed and accuracy. The development of and continual support for XML within Web Services has transformed transactional workflows from simplistic to multifaceted, supporting the development of trading networks (Kangasharju, Lindholm, Tarkoma, 2008). Because companies increasingly rely on Web Services to handle transactions with suppliers and customers, software developers are examining how XML can make Web Services more efficient. A Web Service managing millions of transactions a day with suppliers and customers can slow to the point of failing altogether. Developers are therefore investigating how XML can be used to spread the workload across several instances or installations of the same Web Service so that all transactions complete quickly. Spreading the workload across different Web Service installations is often called scalability (Warkentin, Johnston, 2006). Programmers developing Web Services concentrate on making transaction workflows highly scalable so that both the XML network and the Web Service continue to function even when millions of transactions occur each day.
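Because payload compression recurs throughout the standards cited above, a brief illustration may help. The following is a minimal sketch rather than any of the cited implementations: it assumes a Node.js environment running TypeScript with the built-in zlib module, and the order message and its element names are hypothetical. It shows how the verbose, repetitive markup typical of XML messages shrinks sharply under compression before transmission, which is the effect these standards exploit.

```typescript
import { gzipSync, gunzipSync } from "zlib";

// Hypothetical XML order message; real trading-network payloads repeat
// element and attribute names heavily, which is exactly what gzip exploits.
const items = Array.from({ length: 100 }, (_, i) =>
  `<item sku="SKU-${i}" qty="3" price="19.99"/>`).join("");
const xmlMessage = `<order id="42">${items}</order>`;

// Compress the payload before it is placed on the wire.
const raw = Buffer.from(xmlMessage, "utf8");
const compressed = gzipSync(raw);
console.log(`raw: ${raw.length} bytes, compressed: ${compressed.length} bytes`);

// The receiver reverses the step and recovers the original XML exactly,
// so the smaller on-the-wire size costs nothing in fidelity.
const restored = gunzipSync(compressed).toString("utf8");
console.assert(restored === xmlMessage);
```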

The design objective of creating distributed order management systems that are scalable and secure enough to manage complex transactions is achievable with current advances in AJAX and XML technologies. The concurrent design of XML-based Intelligent Agent Protocol Design Frameworks that support role-based access and transactions has the potential to scale into trusted networks (Warkentin, Johnston, 2006). Many companies today use Virtual Private Networks (VPNs) to connect with remote employees and to secure their supply chain networks. A network that relies on a VPN is one form of trusted network (Warkentin, Johnston, 2006). Trusted networks hold considerable potential because they provide a secure connection from one computer to another, often carrying confidential cost, price, and customer data. VPNs can run on top of TCP/IP and XML networks. Because VPNs are compatible with XML and TCP/IP, companies are examining how to grow their distributed order management systems without sacrificing performance or security.

XML is progressing rapidly toward supporting secure, multi-role access over both private and public connections as well (Warkentin, Johnston, 2006).

HTML optimization routines and techniques that have shown initial performance gains over XML were tested only at the page level, given that HTML is a page-based development technology (Yang, Liao, Fang, 2007). To date, no research has been completed on the optimization of XML networks to support higher-performance AJAX-based applications. This is one of the key objectives of this study: to determine how to optimize the performance of XML networks and AJAX applications to attain the highest possible levels of transaction efficiency and performance.

Studies and tests have shown, however, that HTML-based applications, when used in conjunction with XML, cannot be optimized effectively due to limitations inherent in HTML itself. Attempts to optimize the performance of HTML have continued to produce mixed results due to the page-based approach taken to defining content, navigation, and page structure (Choi, Wong, 2009). HTML's performance is further reduced by the many scripting languages that lack the critical security upgrades needed to make them suitable for use in transaction-intensive networks. When all of these factors are taken into account, it is clear that AJAX and optimized XML networks have significant upside potential for performance improvement. The intent of this research is to measure the performance of AJAX applications on XML and TCP/IP networks.
Once measurements of AJAX performance are completed on each network, it will be possible to project how larger, more complex networks will perform. These larger networks, called exchanges, often include many different suppliers, buyers, and customers. Measuring AJAX applications over TCP/IP and XML networks will therefore provide insight into how these exchanges will perform.

To attain the research objectives, it is necessary to concentrate on the parameters that best quantify the performance gains of XML networks and AJAX applications. The XMLHttpRequest object is used to measure the relative speed and performance of the network. XMLHttpRequest is used for requesting and delivering content of all types throughout an XML network. As it is a JavaScript-based API, it can also be used as part of the container-based metafile testing methodology applied in the series of research efforts completed here. Because XMLHttpRequest can both deliver and retrieve content, it forms the foundation of an effective construct for measuring the performance of the network over time. It is also used for transporting metafiles throughout the networks in a four-square test frame to normalize the specific interferences of the network transport. Every attempt has been made to remove any factor that would detract from the accuracy of the research; this is why the network, operating systems, and servers are all kept consistent, and why the decision was made to use XMLHttpRequest. It not only manages the sending and receiving of content; its round trips can be timed to track network performance. Because all networks carry highly randomized resource loads, being very busy or slow depending on users' needs, it was important to introduce this factor as well, and to introduce it randomly into the analysis. XMLHttpRequest supports this because the payloads it sends can be randomized in size, making the network either very busy or relatively idle.
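As a concrete illustration of this measurement approach, the following is a minimal TypeScript sketch rather than the instrument used in the study: a browser-side probe that times an XMLHttpRequest round trip against a hypothetical /echo endpoint, with the payload size randomized between stated bounds so the network sees both light and heavy loads.

```typescript
// Time one XMLHttpRequest round trip with a randomized payload size.
// The /echo URL and the size bounds are illustrative assumptions.
function timedProbe(url: string, minBytes: number, maxBytes: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const size = minBytes + Math.floor(Math.random() * (maxBytes - minBytes));
    const payload = "x".repeat(size); // randomized load on the network
    const xhr = new XMLHttpRequest();
    xhr.open("POST", url);
    xhr.setRequestHeader("Content-Type", "application/xml");
    const start = performance.now();
    xhr.onload = () => resolve(performance.now() - start); // round trip in ms
    xhr.onerror = () => reject(new Error("probe failed"));
    xhr.send(`<probe>${payload}</probe>`);
  });
}

// Usage: fire a batch of probes and average the round-trip times, so the
// randomness in payload size is smoothed into a stable latency estimate.
async function sampleNetwork(): Promise<void> {
  const times: number[] = [];
  for (let i = 0; i < 20; i++) {
    times.push(await timedProbe("/echo", 1_000, 100_000));
  }
  const mean = times.reduce((a, b) => a + b, 0) / times.length;
  console.log(`mean round trip: ${mean.toFixed(1)} ms over ${times.length} probes`);
}
```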

How companies use XML networks and the performance challenges they encounter form the foundation of this analysis. Focusing specifically on the use case of enterprise-wide adoption of databases to support transaction systems, this dissertation defines randomized traffic flows, in both duration and payload, to determine how optimized XML networks would perform over an extended period of time. XMLHttpRequest is foundational to this approach: it carries transaction data within its containers and uses an index of the other networks to navigate to the system where the transaction data needs to go. To fully replicate a distributed order management environment, a four-square test-bed was devised in a closed-loop testing region. The methodology also accounts for the use of AJAX-based applets to measure performance over the network regardless of payload and the specifics of data transfer components in frames. The tests were completed in a lab in which none of the servers used had Internet access; this was done to eliminate any Internet traffic that could influence the results. It is common practice for larger companies with software engineering teams to do all development in labs where servers have no Internet access, both for security and to ensure applications are not slowed by other services that may access the Internet automatically. This was critically important so that the tests contained here would effectively replicate the performance of a private trading exchange (PTX). The PTX has become a standard framework for many companies with extensive supply chains and sales channels. Procter & Gamble (P&G), for example, has one of the most diverse supply chains in the consumer packaged goods industry and has standardized on the PTX framework. P&G also has a diverse distribution channel that includes grocery stores and chains, mass merchandisers such as Wal-Mart, Tesco, and others, and packaged goods wholesalers. The PTX framework is useful to P&G because it brings together.....
