ETL miner login
Publish: 2021-05-20 09:19:50
1. Unknown_Error
2. Submitting a very large amount of data in a single transaction can overwhelm the system, so it is not recommended. If you must, commit in batches when exporting the insert statements. In PL/SQL Developer the steps are: 1. Log in to PL/SQL Developer and open the table export dialog. 2. Select the table to export, then set the number of rows to commit per batch as needed, as shown in the figure below.
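The batch-commit idea is independent of PL/SQL Developer. A minimal sketch in Python using sqlite3 as a stand-in for Oracle (the table `t`, the fake export data, and the batch size of 1000 are all illustrative assumptions):

```python
import sqlite3

# Commit every BATCH rows instead of holding one huge transaction open.
# sqlite3 here is only a stand-in for the real Oracle target.
BATCH = 1000

conn = sqlite3.connect(":memory:")
conn.execute("create table t (id integer, val text)")

rows = [(i, f"row-{i}") for i in range(5500)]  # pretend export data

for start in range(0, len(rows), BATCH):
    conn.executemany("insert into t (id, val) values (?, ?)",
                     rows[start:start + BATCH])
    conn.commit()  # one commit per batch: not per row, not one giant commit

count = conn.execute("select count(*) from t").fetchone()[0]
print(count)  # 5500
```

Per-batch commits keep any single transaction (and its undo/redo footprint) bounded, which is exactly why the export dialog asks for a rows-per-commit value.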
3. Search: what is C:\Windows\System32\wfp\wfpdiag.etl for? It is the diagnostic trace log of the Windows Filtering Platform (WFP). Sometimes this file makes the system noticeably sluggish, with a very large total volume of reads and writes.
4. For error code 651, troubleshoot along the following lines:
1. Restart the optical modem and reseat the fiber connector firmly.
2. Check whether the modem's indicator lights are normal. If they are abnormal (the modem shows red / the PON light is flashing / other indicators are off), call the local carrier hotline (10086) to report a broadband fault.
3. Remove the router and dial up directly through the optical modem. If the dial-up connection still fails, escalate through the broadband fault reporting center.
4. Reconfigure the router.
5. 1. Certified products can be looked up on the UL website; ETL listings can be queried through its
2. Alternatively, engage a professional certification agency or consulting firm.
6. Which database are you using? For Oracle, log in with sqlplus and run: create user XXX identified by password default tablespace XXX; then grant connect, resource to XXX;
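Written out as a script, the commands from the answer look like the following (XXX, the password, and the tablespace name are placeholders to replace with real values):

```sql
-- Run in sqlplus as a privileged user (e.g. sqlplus / as sysdba).
-- "xxx", "my_password", and "users" are placeholders, not real names.
CREATE USER xxx IDENTIFIED BY my_password
  DEFAULT TABLESPACE users;

-- CONNECT lets the user log in; RESOURCE lets it create its own objects.
GRANT CONNECT, RESOURCE TO xxx;
```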
7. There are many ETL scheduling tools on the market, including Control-M, TASKCTL, MOIA, ETL-Plus, WFT, and IBM's own scheduler. Among them, Control-M has the strongest performance, while TASKCTL is the easiest to operate. TASKCTL, MOIA, and ETL-Plus each have their own strengths and fit the habits of domestic (Chinese) customers well. IBM's own scheduling tool is free, but it is hard to use and maintain, and few customers adopt it. Judging from customer feedback, large customers currently prefer Control-M. If domestic scheduling tools want breakthroughs in performance, efficiency, stability, and handling large job counts and high concurrency, they will need to strengthen their internals and invest more in the code. In my own work, the first tool used to schedule DataStage jobs was MOIA, later replaced by Control-M. In short, the former is simpler to configure but offers no integrated graphical view of the dependencies between jobs; Control-M provides exactly that.
8. Statement: this is the blogger's original article and may not be reproduced without the blogger's permission. Background: working with Hive is a good way to understand Hadoop's HDFS and MapReduce.
Scenario: Hadoop dual cluster, Hive
Version: which versions of Hadoop and Hive pair best together? There is no definitive answer yet, and every version has some bugs.
Versions used in this example: hadoop-1.0.3, hive-0.10.0-bin
Goal: import a local web-access log file into Hive.
Hive demo command list
Step 1: log in to Hive and create a table
[root@master conf]# hive
hive> create table dq_httplog
(ipdz string,
ll string,
sj string,
khd string,
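The create-table statement above is cut off. A hedged sketch of how the example might continue, assuming the column list ends here and assuming a tab-delimited log file at an example path (none of this is confirmed by the original; the column comments only guess at the pinyin abbreviations):

```sql
-- Assumed completion of the truncated statement; delimiter and path are examples.
hive> create table dq_httplog
    (ipdz string,   -- guessed: IP address (IP地址)
     ll string,     -- guessed: traffic (流量)
     sj string,     -- guessed: time (时间)
     khd string)    -- guessed: client (客户端)
    row format delimited fields terminated by '\t';

-- Step 2 (assumed): load the local access log into the table
hive> load data local inpath '/tmp/access.log' into table dq_httplog;
```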