Thursday, May 6, 2010


[Case Study] Oracle export -> import using Netcat

Export. Another common use is to move a schema, several schemas, or even the entire database from one environment to another, for example, to refresh a UAT environment with a copy of production: you run an export of the production database, which generates the dump file (.dmp), and then run an import into the UAT database. Sounds simple, right? And it is, as long as the database is relatively small; the complications appear when the database is relatively large and the space on the filesystems is not enough to hold the dump file.
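
A minimal sketch of that two-step refresh with the classic tools, assuming a schema named SCOTT and placeholder credentials and paths:

    # on the production server: export the schema to a dump file
    exp system/manager OWNER=scott FILE=/stage/scott.dmp LOG=exp_scott.log

    # copy /stage/scott.dmp to the UAT server (scp, ftp, ...), then run:
    imp system/manager FROMUSER=scott TOUSER=scott FILE=/stage/scott.dmp LOG=imp_scott.log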


Another, not so common, use is export/import to migrate a database, or only the objects (schemas) of interest, when it comes to a change of platform (operating system). That is, if the database is mounted on a 64-bit Linux server and you are interested in mounting it on a Solaris (SPARC) server, it is not possible to do this by simply copying the database files from one server to the other, as is usually done when the servers are on the same platform. The same applies when migrating from Solaris (SPARC) to HP-UX (PA-RISC or Itanium), or vice versa: it cannot be done because of the different endianness of each platform.
Putting aside the discussion of why you would change from one platform to another, and focusing attention on the points to consider in order to migrate the database from one server to the other in the shortest time possible, suppose this is a relatively large production database (greater than 1 TB) that cannot be offline for a long period of time: it becomes a challenge with many aspects.

The ideal solution: you look at the RMAN documentation, which shows the matrix of platforms between which you can create "transportable tablespaces" with your current engine version. If your combination applies, you generate the corresponding transportable tablespaces, load them into the database on the new platform, take the production database down, apply the archive logs generated in production since the transportable tablespace generation began, bring the new database up, and ask the application teams to run tests pointed at the new database to validate that everything works properly. It should take no more than a couple of hours from taking production down until the application tests can be declared satisfactory and the database on the new platform becomes production. Of course, for the plan to succeed there must be a validation process beforehand and, above all, the plan must have been rehearsed as many times as possible, to refine all the details, resolve the errors that appear, among many other points.
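
A hedged sketch of that transportable-tablespace path (the 10g cross-platform variant; the tablespace name USERS, the platform string, the datafile name, and all paths and credentials are assumptions):

    # 1. on the source: make the tablespace read-only and export its metadata
    echo "ALTER TABLESPACE users READ ONLY;" | sqlplus -s system/manager
    exp system/manager TRANSPORT_TABLESPACE=y TABLESPACES=users FILE=users_meta.dmp

    # 2. convert the datafiles to the target platform's endian format with RMAN
    rman target / <<EOF
    CONVERT TABLESPACE users
      TO PLATFORM 'Solaris[tm] OE (64-bit)'
      FORMAT '/stage/%U';
    EOF

    # 3. copy the converted datafiles and users_meta.dmp to the target server,
    #    then plug the tablespace into the new database
    imp system/manager TRANSPORT_TABLESPACE=y DATAFILES=/stage/users01.dbf \
        FILE=users_meta.dmp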

Points against: what happens if you cannot create transportable tablespaces between the two platforms? What if the platforms permit it, but you do not have space on the filesystems? What if the platforms do not permit it and you do not have space on the filesystems either? What if you have sufficient space but need to compress the backup to transfer it over the network? What if the bandwidth of the network is not acceptable (100 Mbps)? Etc., etc., etc.

All these variables make the process complex, and the best thing in this case is to put them on the table and discuss with the company its ability to acquire the necessary resources (more disk space, a gigabit network or better, etc.).

And what if the company is not in a position to purchase any of these resources (more space, a gigabit network, etc.), perhaps not at this time and, for budget reasons, not any time soon?

The solution. Using *nix resources in combination with export/import, it is possible to migrate directly from database to database without ever having to generate the dump file, and with the two processes running in parallel: from the moment the export is launched, the output it generates is transferred IMMEDIATELY over the network to the server running the import, which applies the data instantly. Cool, right?
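
A minimal sketch of the pipeline, assuming classic exp/imp, a netcat that accepts -l -p, schema SCOTT, port 7777, and placeholder credentials (adjust all of these to your environment); start the destination side first:

    # --- destination server: listen, decompress, feed imp through a named pipe ---
    mkfifo /tmp/imp_pipe
    nc -l -p 7777 | gunzip -c > /tmp/imp_pipe &
    imp system/manager FILE=/tmp/imp_pipe FROMUSER=scott TOUSER=scott \
        COMMIT=Y LOG=imp_scott.log

    # --- source server: exp writes into a named pipe; gzip and nc stream it out ---
    mkfifo /tmp/exp_pipe
    gzip -c < /tmp/exp_pipe | nc dest_host 7777 &
    exp system/manager OWNER=scott FILE=/tmp/exp_pipe DIRECT=Y LOG=exp_scott.log

The named pipes are what let exp and imp, which expect a FILE= argument, read and write a stream instead of a real dump file; no intermediate .dmp ever touches disk.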





What do you need?
  • Horsepower (processing power) on both servers.
  • A fast network (at least Gigabit).
  • The destination database optimized for the import.

Advantages
If both servers have two or more processors, or processors with multiple cores, you can launch two, three, four, or five export/import processes in parallel. For example, if you have a 300 GB partitioned table, you can unload it with, say, three processes that each take care of 100 GB, as sketched below.
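
A hedged sketch of that split, assuming a table SCOTT.BIG_TAB with three partitions P1, P2, P3 of roughly 100 GB each (classic exp supports partition-level export with the table:partition syntax); each pipeline gets its own port:

    # source server: one export pipeline per partition
    for i in 1 2 3; do
      mkfifo /tmp/exp_pipe_$i
      gzip -c < /tmp/exp_pipe_$i | nc dest_host 777$i &
      exp system/manager TABLES=scott.big_tab:p$i FILE=/tmp/exp_pipe_$i \
          DIRECT=Y LOG=exp_p$i.log &
    done

    # destination server: three matching listeners and imports
    # (IGNORE=Y so the later imports append instead of failing on the
    #  already-created table)
    for i in 1 2 3; do
      mkfifo /tmp/imp_pipe_$i
      nc -l -p 777$i | gunzip -c > /tmp/imp_pipe_$i &
      imp system/manager FILE=/tmp/imp_pipe_$i FROMUSER=scott TOUSER=scott \
          COMMIT=Y IGNORE=Y LOG=imp_p$i.log &
    done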



Compression. Better yet, what if instead of transferring 300 GB over the network, only a relatively minimal amount is transferred? Assuming the table in question contains only fields with common data types (no LOBs or LONGs), the level of compression with gzip can be 80% or higher. That is, instead of transferring the 300 GB point to point, standard gzip compression is applied to reduce the transfer to about 55 GB; even better, if you apply the three-process example, the transfer load would be spread at about 18.3 GB each. Super cool! This is where the servers' horsepower comes into play.
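
The compression utility is just one stage of the pipe, so it can be tuned or swapped; a small sketch of the trade-off (the levels shown and the sample-based ratio check are illustrative, not from the original exercise):

    # pick ONE compression stage for the pipe: trade CPU for bytes on the wire
    gzip -1 -c < /tmp/exp_pipe | nc dest_host 7777 &    # fastest, lower ratio
    # gzip -9 -c < /tmp/exp_pipe | nc dest_host 7777 &  # best ratio, most CPU

    # rough way to gauge the achievable ratio beforehand on a sample dump
    gzip -c sample.dmp | wc -c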


It is necessary to optimize the database where you will run the import; Metalink note 93763.1 has just what you need. This optimization is only for the import, so once you have made sure that everything went well, and before making the switch so that the new database becomes production, the settings must be returned to their original values.

In the example of the 300 GB table I did not take the indexes into account; that is, the 300 GB is only the size of the table, including the primary key index. The table may have other indexes, which would increase the transfer volume: for example, if the table also has 5 indexes with an overall size of 100 GB, the total transfer would be 400 GB. Since the aim is to transfer the essentials in the shortest possible time, it is acceptable to transfer only the data and then create the indexes on the destination database, of course generating a script beforehand from the production database with the statements to create them.
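
One way to produce that index script with the classic tools (a sketch: imp's INDEXFILE parameter writes the CREATE INDEX statements to a file without importing anything; names and paths are placeholders):

    # 1. take a rows-free structural export on the source
    exp system/manager OWNER=scott ROWS=N FILE=scott_ddl.dmp

    # 2. have imp write the CREATE INDEX statements into a script
    imp system/manager FULL=Y FILE=scott_ddl.dmp INDEXFILE=scott_indexes.sql

    # 3. stream the data with INDEXES=N on both ends of the netcat pipeline,
    #    then build the indexes on the destination once the data has arrived
    sqlplus scott/tiger @scott_indexes.sql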
With two databases, one on 9i (source) and the other on 10g (destination), mounted on HP-UX servers with a 100 Mbps network, these are the numbers obtained when transferring a complete schema of 2.3 GB:

Data + indexes + unoptimized destination database:

    Compression method    Total MB    MB transferred    Transfer time
    compress              2,336       362               16:16 min
    gzip                  2,336       254               15:33 min

Data only + optimized destination database:

    Compression method    Total MB    MB transferred    Transfer time
    compress              1,387       361               7:01 min
    gzip                  1,387       254               6:12 min

As can be seen, the volume of bytes transferred is nearly the same with or without the indexes; the difference is in the transfer time, which is reduced by just over 50%. The exercise also includes a comparison of two compression methods, with gzip the more efficient in both bytes and time.

According to Oracle, an import lasts approximately 2 to 2.5 times as long as the export. The transfer times shown in the tables cover both operations, the export and the import together, so the best scenario is 6 minutes 12 seconds (optimized export/import) and the worst is 15 minutes 33 seconds (traditional export/import). If you apply this to a real environment with a database in the order of hundreds of gigabytes, the difference becomes quite substantial, especially given that the migration is done in a single step and depends only on the processing power of the servers and the speed of the network. And what about disk space? It is no longer a problem.

Disadvantages





  • This method does not work with Data Pump (expdp writes its dump files on the database server through directory objects and cannot write to a pipe or to standard output).

  • If the transfer is interrupted due to network problems, the whole process has to be restarted from scratch.
