Oracle 12c has been out for quite a while. Although I have deployed several single-instance Oracle 12c systems recently, there has been no opportunity to implement 12c RAC in production: we obviously cannot tear down the existing 11g RAC and replace it with 12c RAC, and no new project has called for it. But we should not wait until the work lands on us to start learning and testing; it pays to prepare in advance. I recently got hold of a well-configured Gigabyte mini PC, so I put it to use: install Proxmox, virtualize a batch of systems, and then deploy and test Oracle 12c RAC on them.
Oracle can achieve load balancing without relying on any third-party tools; it handles everything itself, which is genuinely impressive. Deploying an Oracle load-balanced, highly available setup really means deploying Oracle RAC. Before starting the deployment you need a plan, which mainly covers the following aspects:
1. Shared storage: this is the most critical facility of Oracle RAC. It holds data files, archive logs, voting (quorum) files and other important files, so availability, capacity, performance and cost all need to be weighed. In most of my previous projects the storage was an external dual-controller array filled with 10,000 rpm 2.5-inch or 15,000 rpm 3.5-inch SAS disks, with every slot populated and no short-term expansion planned.
2. Servers: computing resources come from the servers, and again availability, performance and cost have to be considered. In past projects we generally used 1U rack servers with 64 GB of memory, multi-core multi-threaded CPUs, two SSDs in a fault-tolerant RAID, and four network interface cards.
3. Network planning: there are at least two network segments, each with its own switches (at least two in total), all running at full gigabit, and the cabling should be Cat 6. Speaking of cabling, I once fell into a pit I still remember vividly. It was an online lottery project: the servers filled two cabinets and all the equipment was fairly high-end for its time. Procurement was explicitly instructed to buy only factory-terminated network cables. A group of us worked day and night, finished debugging, and the system ran normally after going live. Before long, however, the Oracle RAC cluster started flapping between good and bad states. We checked the logs and the application without finding the cause, and finally had to go to the machine room in person. Walking back and forth and watching the indicator lights, I finally spotted the problem: one port on the heartbeat switch kept alternating between green and amber, which had to be a speed-negotiation mismatch, and the cable on that port looked different from the other factory-made ones. It turned out that at purchase time the supplier had delivered one cable fewer than ordered, so the machine room staff had crimped one by hand. Replacing it with a factory-made Cat 6 cable solved the problem.
I once wrote an article, "Oracle 11g RAC production environment deployment in detail", posted on my 51cto blog at https://blog.51cto.com/sery/1546346; you are welcome to use it as a reference. For this article no real environment is available (it cannot be done on the existing production systems, or the boss would have my head), so everything is carried out in a virtual environment. This does not reduce its value for learning: the basic ideas and methods are the same, and a virtual environment is actually more convenient for experiments and testing.
Prepare the basic environment
On this trip to Beijing I picked up a mini host configured with an 8-thread CPU, a 1 TB hard disk and 12 GB of memory. It is very suitable for virtualization: it saves power, takes up little space and runs quietly.
Virtualize this mini host, create two virtual machines on it for Oracle, and create a third virtual machine running Openfiler to provide the shared storage for Oracle.
◎ Host virtualization
Proxmox is highly recommended, and it is what I use myself. The current version is Proxmox 5.2; it ships as a single ISO, supports hyper-converged setups, and is easy to use. Download the image from the official site (www.proxmox.com) and write it to a USB stick with UltraISO so that it can boot. If the USB stick fails to boot, redo the UltraISO write with "raw" as the write mode, as shown in the figure below:
The Proxmox installation itself is simple, so I will not go through it. Proxmox is based on Debian underneath, and while running it periodically executes apt-get update. To avoid errors such as "TASK ERROR: command 'apt-get update' failed: exit code 100", log in to the system (Debian) over ssh, edit the file /etc/apt/sources.list.d/pve-enterprise.list, and comment out the only line in it. You can also simply ignore the error.
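As a minimal sketch (the repository suite name inside the file depends on your Proxmox/Debian release), commenting out that line can be done like this:
root@pve99:~# cp /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.bak
root@pve99:~# sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
root@pve99:~# apt-get update    # should now finish without exit code 100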
○ Multi-NIC handling
Your test environment may be like mine: there is only one physical network card, but Oracle RAC needs at least two. What to do? Simply add another one; the specific steps are as follows:
1. In the Proxmox management interface select "Create", then "Linux Bridge", and enter the IP address and netmask (other items such as the gateway can be left empty).
2. Make the network settings take effect: ssh into the Debian system and reboot. Log in again and run "ip add" to see the virtual network interface you just created, as shown in the figure below:
The same result can also be seen in the Proxmox web management interface.
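Behind the scenes the GUI simply writes a bridge stanza into /etc/network/interfaces. A minimal sketch of what the second bridge might look like (10.10.10.99/24 is only an example address in the heartbeat segment; use whatever you planned):
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.99
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
Because this second bridge is not attached to a physical NIC, bridge_ports is none; the virtual machines plugged into it can still talk to each other, which is all the RAC heartbeat needs in this test setup.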
○ Prepare the operating system image file
As far as I know, there are two ways to get an operating system ISO image onto the host: upload it through the Proxmox web management interface, or log in to the Debian system, change into the directory configured for image files, and fetch it directly with a tool such as wget.
1. Upload the ISO file through the web interface (the file must first be downloaded to your local computer):
Having tried this a few times, I find this method cumbersome and slow, and I rarely use it now.
2. Log in to the system and download the ISO directly, in a single step. For a server in a data center this saves a lot of time compared with downloading to your workstation and uploading again.
root@pve99:~# cd /var/lib/vz/template/iso
root@pve99:/var/lib/vz/template/iso# wget http://mirrors.163.com/centos/7.5.1804/isos/x86_64/CentOS-7-x86_64-DVD-1804.iso
root@pve99:/var/lib/vz/template/iso# wget http://mirrors.cn99.com/centos/6.10/isos/x86_64/CentOS-6.10-x86_64-bin-DVD1.iso
After the downloads finish, check in the web management interface that the images appear in the list.
◎ Create a virtual machine
Since the hosts that will run Oracle RAC need exactly the same resource configuration, you can create one virtual machine first and install its operating system (do not install Oracle yet), then produce the second virtual machine by cloning it, change the clone's network settings, and put it into use.
○ Create the first virtual machine
In the Proxmox web management interface click "Create VM", give the virtual machine an easily recognizable name such as db107, and move on to the next step.
Under the "OS" tab, select "Use CD/DVD disc image file (ISO)" and pick the previously uploaded operating system ISO from the list box below, as shown in the figure:
The next steps allocate the disk (32 GB), CPU (4 cores) and memory (8 GB). This default configuration is not yet enough for our purposes: we still need to add a second hard disk to hold the Oracle installation directory and a swap partition, and add a second network interface for the heartbeat between the Oracle nodes.
1. Add a hard disk to the virtual machine:
In the management interface select the virtual machine you just created, open its "Hardware" panel, and click the "Add" button.
Set the size to 50 GB: 16 GB is planned for swap and the rest is used for the Oracle software directory.
2. Add a network interface:
The steps are basically the same as adding a hard disk, except that in the "Add" drop-down list you select "Network Device", as shown in the figure below. A command-line sketch of the same operations follows.
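If you prefer the command line, the same result can be produced with Proxmox's qm tool. A rough sketch, assuming VM ID 107, a storage pool named local-lvm and the bridges vmbr0/vmbr1 from earlier (adjust IDs, storage and bridge names to your environment):
root@pve99:~# qm create 107 --name db107 --memory 8192 --cores 4 --scsihw virtio-scsi-pci --scsi0 local-lvm:32 --cdrom local:iso/CentOS-7-x86_64-DVD-1804.iso --net0 virtio,bridge=vmbr0
root@pve99:~# qm set 107 --scsi1 local-lvm:50          # second disk: swap + Oracle software directory
root@pve99:~# qm set 107 --net1 virtio,bridge=vmbr1    # second NIC for the RAC heartbeat
root@pve99:~# qm clone 107 108 --name db108 --full     # full clone for the second node, once the OS is installed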
◎ Install the virtual machine operating system
Once the virtual machine has been created, start it from the web management interface and click the ">_ Console" button on the page to enter the operating system installation screen, as shown in the figure:
The remaining steps are no different from regular system installation and will not be repeated.
◎ Install the shared storage (Openfiler)
Openfiler, like Proxmox, is distributed as an ISO. The Openfiler virtual machine needs at least two disks: one for the system itself and one for the shared data. Once the capacity allocation has been planned you can start the installation; the process is very simple and I will not dwell on it.
The figure below shows the disk usage of my finished Openfiler installation; the large-capacity disk is the one used for iSCSI sharing.
Next, configure the storage. Click the "Services" item and enable the iSCSI target service.
Create a partition (Linux Physical Volume) on the large free disk, then a volume group vg-data (name it as you like) and logical volumes on top of it; when creating a logical volume, choose "block (iSCSI, FS, etc.)" from the "Filesystem / Volume type" drop-down list. Once this is done, click the "iSCSI Targets" menu on the right and add a new iSCSI target (Add new iSCSI Target). If the iSCSI service is not running, the Add button is greyed out and you cannot proceed.
Complete the logical unit (LUN) mapping, as shown below:
Because this is an internal network, access can be left unrestricted. The storage side is now configured.
◎ Mount the iSCSI disks on the servers (perform on both hosts)
Only a few simple steps are needed to attach the iSCSI shared disks on each host and have them come up automatically at boot.
○ Start the iSCSI service. CentOS may not install the familiar and handy ntsysv by default; install it with yum. Run ntsysv, tick the iscsi entry, and the iscsi service will start automatically on the next boot.
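A minimal sketch of the package installation and service setup on CentOS 7 (ntsysv ships in its own package; the systemctl line is the equivalent way to enable the daemon):
[root@db115 ~]# yum -y install iscsi-initiator-utils ntsysv
[root@db115 ~]# ntsysv                    # tick the iscsi/iscsid entries so they start at boot
[root@db115 ~]# systemctl enable iscsid   # CentOS 7 equivalent of the ntsysv setting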
○ Scan the iSCSI target and record the output. The command is as follows:
[root@db115 ~]# iscsiadm -m discovery -t sendtargets -p 172.16.35.107
172.16.35.107:3260,1 iqn.2006-01.com.openfiler:tsn.3ceca0a95110
What we need is the target name (the iqn... string) after the ",1".
○ Log in to the target disk with the following command:
# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.3ceca0a95110 -l
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.3ceca0a95110, portal: 172.16.35.107,3260] (multiple)
Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.3ceca0a95110, portal: 172.16.35.107,3260] successful.
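To make the shared disks reappear automatically after a reboot, as mentioned above, the discovered node record can be marked for automatic login; a sketch reusing the target discovered earlier:
# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.3ceca0a95110 -p 172.16.35.107 --op update -n node.startup -v automatic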
○ Verify the attached disks; do this once on each host. The command and output are as follows:
[root@db115 ~]# fdisk -l
... output omitted ...
Disk /dev/sdc: 51.2 GB, 51170508800 bytes, 99942400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 122.9 GB, 122876329984 bytes, 239992832 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 10.2 GB, 10234101760 bytes, 19988480 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
In this way, the three shared volumes are attached to each node.
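Before going further it is worth confirming that the three volumes really are the same devices on both nodes. One way, a sketch assuming CentOS 7 (on CentOS 6 the helper lives at /sbin/scsi_id), is to compare the SCSI WWIDs reported on each host:
[root@db115 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
[root@db115 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
[root@db115 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sde
The same commands on the other node should return identical IDs for the corresponding disks.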
Deploy Oracle 12c RAC
The work falls into three stages: preparation before installation, installing the software, and creating the database.
◎ Preparation before installation
The main steps are: prepare the swap and data partitions, set up the hostnames and IP mappings, adjust the system configuration and dependency packages, and prepare the desktop environment.
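The hostname and IP mapping step is detailed later in the series; purely as an illustrative sketch, an /etc/hosts mapping for a two-node RAC typically contains entries like the ones below (every name and address here is an assumption for illustration, not this article's actual plan; the SCAN name is normally resolved via DNS):
# public network
172.16.35.115   db115
172.16.35.116   db116
# virtual IPs (VIP)
172.16.35.215   db115-vip
172.16.35.216   db116-vip
# private interconnect (heartbeat)
10.10.10.115    db115-priv
10.10.10.116    db116-priv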
◆ Prepare the swap partition; this needs to be performed on each node.
fdisk /dev/sdb        # create the swap partition (type 82)
mkswap /dev/sdb1      # format it as swap
swapon /dev/sdb1      # activate it immediately
During the fdisk step, choose partition type "82" and a size of 18 GB. Afterwards, check that the swap space is active with the command free -m. To have the swap partition activated at boot, /etc/fstab must also be modified; the complete entries are posted later, after the data partition has been added.
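For reference, a swap entry in /etc/fstab usually looks like the line below; treat it as an illustrative sketch, since the article posts its own complete fstab after the data partition is added:
/dev/sdb1    swap    swap    defaults    0 0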
For more information, please see the column "Practice of Load Balancers".