Ethereum cudaError
Hardware
The mining algorithm is memory-hard: the DAG has to fit in memory, so each GPU needs 1-2 GB of RAM. If you get the error message "Error GPU mining. GPU memory fragmentation?", you do not have enough memory. The GPU mining software is based on OpenCL, so an AMD GPU will be faster than an NVIDIA GPU of the same class. ASICs and FPGAs are relatively inefficient and are therefore blocked. To get OpenCL for your chip and platform, try one of the following (a quick verification sketch follows the list):
AMD SDK OpenCL
NVIDIA CUDA OpenCL
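Once an SDK is installed, a quick way to confirm that OpenCL is actually usable is to enumerate the platforms the driver exposes. A minimal sketch, added here for illustration (plain OpenCL host code, linked against -lOpenCL):

// check_opencl.cpp - list the OpenCL platforms the installed driver exposes.
// Build (assuming an OpenCL SDK is installed): g++ check_opencl.cpp -lOpenCL
#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_uint count = 0;
    // First call asks only for the number of available platforms.
    if (clGetPlatformIDs(0, NULL, &count) != CL_SUCCESS || count == 0) {
        fprintf(stderr, "No OpenCL platforms found - check the driver/SDK install.\n");
        return 1;
    }
    cl_platform_id platforms[16];
    cl_uint n = count < 16 ? count : 16;
    clGetPlatformIDs(n, platforms, NULL);
    for (cl_uint i = 0; i < n; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", i, name);
    }
    return 0;
}

If it prints an AMD or NVIDIA platform, the miner should be able to find OpenCL as well.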
Ubuntu Linux settings
For this quick guide, you will need Ubuntu 14.04 or 15.04 and the fglrx graphics driver. You can also use NVIDIA drivers and other platforms, but you will have to find your own way to a working OpenCL installation, such as Genoil's ethminer fork.
If you are using 15.04, go to "Software and Updates > Additional Drivers" and set it to "Using video driver for the AMD graphics accelerator from fglrx".
If you are using 14.04, go to "Software and Updates > Additional Drivers" and set it to "Using video driver for the AMD graphics accelerator from fglrx". Unfortunately, for some people this method may not work, because there is a known bug in Ubuntu 14.04.02 that prevents you from switching to the proprietary graphics driver required for GPU mining.
So, if you run into this bug, go to "Software and Updates > Updates" and select "Pre-released updates". Then go back to "Software and Updates > Additional Drivers" and set it to "Using video driver for the AMD graphics accelerator from fglrx". After rebooting, it is worth checking that the driver is now properly installed (for example, by going back to "Additional Drivers").
Whatever you do, if you are on 14.04.02, do not alter the driver or the driver configuration once it is set. For example, using aticonfig --initial (and especially the -f / --force option) will "break" your setup. If you accidentally alter the configuration, you will need to uninstall the driver, reboot, reinstall the driver, and reboot again.
In GPU high-performance programming (the book CUDA by Example), a HANDLE_ERROR macro is used to check the cudaError_t returned by each CUDA call:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Print the failing call's location and bail out on any CUDA error.
static void HandleError(cudaError_t err, const char *file, int line) {
    if (err != cudaSuccess) {
        printf("%s in %s at line %d\n", cudaGetErrorString(err), file, line);
        exit(EXIT_FAILURE);
    }
}
#define HANDLE_ERROR(err) (HandleError(err, __FILE__, __LINE__))
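A minimal usage sketch, assuming the HANDLE_ERROR macro above is in scope (the buffer size and the kernel are illustrative, not from the original):

// example.cu - wrapping CUDA runtime calls with HANDLE_ERROR.
__global__ void fill(int *data, int value) {
    data[threadIdx.x] = value;
}

int main() {
    int *d_buf = NULL;
    // Every runtime call that returns a cudaError_t can be wrapped directly.
    HANDLE_ERROR(cudaMalloc(&d_buf, 64 * sizeof(int)));
    fill<<<1, 64>>>(d_buf, 42);
    // A kernel launch itself returns no status; check the last error instead.
    HANDLE_ERROR(cudaGetLastError());
    HANDLE_ERROR(cudaDeviceSynchronize());
    HANDLE_ERROR(cudaFree(d_buf));
    return 0;
}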
t="code" l="cpp">/
#definegetLastCudaError(msg)_getLastCudaError(msg,_FILE_,_LINE_)
inlinevoid__getLastCudaError(constchar*errorMessage,constchar*file,constintline)
{
cudaError_terr=cudaGetLastError();< br />
if(cudaSuccess!= err)
{
fprintf(stderr,"%s(%i):getLastCudaError()CUDAerror:%s:(%d)%s.
",
file,line,errorMessage,(int)err,cudaGetErrorString(err));< br />DEVICE_RESET
exit(EXIT_FAILURE);< br />}
}
#endif
kernel<& <> lt; 1.1>> gt;& gt;();< br />getLastCudaError("ErrorinCalling'kernel'"); pre>
Error type: CUDA_ERROR_OUT_OF_MEMORY

E tensorflow/stream_executor/cuda/cuda_driver.cc:924] failed to alloc 17179869184 bytes on host: CUDA_ERROR_OUT_OF_MEMORY
W ./tensorflow/core/common_runtime/gpu/pool_allocator.h:195] could not allocate pinned host memory of size: 17179869184
Killed
What this means is roughly: the server's GPUs have M of memory in total, but TensorFlow can only get N (N < M). In other words, TensorFlow is telling you that it could not claim all the GPU resources it asked for, so it gave up and exited.
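Before blaming the framework, it helps to check how much device memory is actually free. A small standalone probe using the CUDA runtime (added for illustration, not part of the original text):

// mem_probe.cu - print total and currently free memory for each visible GPU.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        size_t free_bytes = 0, total_bytes = 0;
        // cudaMemGetInfo reports memory for the currently selected device.
        if (cudaMemGetInfo(&free_bytes, &total_bytes) == cudaSuccess) {
            printf("GPU %d: %zu MiB free of %zu MiB total\n",
                   dev, free_bytes >> 20, total_bytes >> 20);
        }
    }
    return 0;
}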
Solution: find the Session in the code, and add the following before the Session is defined:

# use at most 70% of the GPU's memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
config = tf.ConfigProto(allow_soft_placement=True, gpu_options=gpu_options)
# instead of giving TensorFlow all GPU memory up front, let usage grow on demand
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

With this, the problem is gone.
In fact, TensorFlow is a greedy tool: even if you use device_id to pin it to one GPU, it will still occupy memory on the other GPUs. Before running the program you must execute

export CUDA_VISIBLE_DEVICES=n   (n is the index of the GPU that should be visible)

and then run python code.py; it will no longer take up the other GPUs' resources.

Recently it has been TensorFlow; before that it was Caffe. For three days in a row this week, people in the lab complained that I was occupying too many server resources, which is really tiring. The method above is all it takes: execute export CUDA_VISIBLE_DEVICES=n before running the code, so that only one (or a few) GPUs are visible and the others stay hidden.
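CUDA_VISIBLE_DEVICES is honored by the CUDA driver itself, so it restricts any CUDA program, not just TensorFlow. A quick sketch (added for illustration) to confirm which devices a process can actually see:

// visible_devices.cu - list the GPUs this process is allowed to see.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Visible GPUs: %d\n", count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Within the restricted set, devices are renumbered from 0.
        printf("  device %d: %s\n", dev, prop.name);
    }
    return 0;
}

Run it as CUDA_VISIBLE_DEVICES=1 ./visible_devices and it reports a single device, renumbered as device 0.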