Implementing all operators on CUDA #463

Open
leejet opened this issue Aug 19, 2023 · 2 comments

leejet (Contributor) commented Aug 19, 2023

I did some simple profiling of ggml_cuda_op and found that the time spent on memory copies is several times the computation time. This is because not all operators have CUDA implementations, so during computation data is frequently copied back and forth between the GPU and CPU, which consumes a lot of time. Here's the data:

total time 930us, op time 38us,  malloc time 7us,  copy time 876us
total time 904us, op time 29us,  malloc time 5us,  copy time 860us
total time 3989us, op time 232us,  malloc time 5us,  copy time 3718us
total time 3872us, op time 205us,  malloc time 4us,  copy time 3654us
total time 714us, op time 20us,  malloc time 5us,  copy time 682us
total time 3286us, op time 68us,  malloc time 678us,  copy time 2530us
total time 2119us, op time 108us,  malloc time 6us,  copy time 1993us
total time 923us, op time 36us,  malloc time 5us,  copy time 873us
total time 824us, op time 32us,  malloc time 5us,  copy time 779us
total time 831us, op time 24us,  malloc time 6us,  copy time 794us
total time 3623us, op time 169us,  malloc time 4us,  copy time 3441us
total time 3672us, op time 163us,  malloc time 6us,  copy time 3496us
total time 1180us, op time 21us,  malloc time 6us,  copy time 1148us
total time 2574us, op time 18us,  malloc time 7us,  copy time 2539us
total time 1883us, op time 42us,  malloc time 5us,  copy time 1828us
total time 731us, op time 26us,  malloc time 6us,  copy time 691us
total time 658us, op time 19us,  malloc time 5us,  copy time 627us
total time 773us, op time 20us,  malloc time 7us,  copy time 739us
total time 4271us, op time 201us,  malloc time 5us,  copy time 4054us
total time 4273us, op time 159us,  malloc time 4us,  copy time 4102us
total time 959us, op time 23us,  malloc time 7us,  copy time 921us
total time 2046us, op time 16us,  malloc time 6us,  copy time 2018us
total time 2555us, op time 36us,  malloc time 6us,  copy time 2505us
total time 1115us, op time 23us,  malloc time 7us,  copy time 1077us
total time 732us, op time 18us,  malloc time 6us,  copy time 702us
total time 914us, op time 18us,  malloc time 6us,  copy time 883us
total time 4237us, op time 197us,  malloc time 6us,  copy time 4028us
total time 3310us, op time 147us,  malloc time 3us,  copy time 3149us
total time 753us, op time 16us,  malloc time 5us,  copy time 725us
total time 2458us, op time 15us,  malloc time 7us,  copy time 2429us
total time 1730us, op time 27us,  malloc time 6us,  copy time 1690us
total time 701us, op time 19us,  malloc time 5us,  copy time 670us
total time 669us, op time 16us,  malloc time 4us,  copy time 642us
total time 778us, op time 17us,  malloc time 7us,  copy time 747us
total time 3389us, op time 152us,  malloc time 5us,  copy time 3225us
total time 4181us, op time 156us,  malloc time 4us,  copy time 4009us
total time 626us, op time 17us,  malloc time 4us,  copy time 600us
total time 1888us, op time 14us,  malloc time 5us,  copy time 1862us
total time 1851us, op time 26us,  malloc time 5us,  copy time 1812us
total time 774us, op time 18us,  malloc time 6us,  copy time 743us
total time 638us, op time 15us,  malloc time 5us,  copy time 611us
total time 643us, op time 15us,  malloc time 4us,  copy time 617us
total time 3555us, op time 152us,  malloc time 4us,  copy time 3392us
total time 3711us, op time 147us,  malloc time 4us,  copy time 3550us
total time 686us, op time 18us,  malloc time 6us,  copy time 657us
total time 1745us, op time 14us,  malloc time 6us,  copy time 1721us
total time 2019us, op time 28us,  malloc time 5us,  copy time 1979us
total time 681us, op time 16us,  malloc time 6us,  copy time 652us
total time 657us, op time 15us,  malloc time 4us,  copy time 631us
total time 705us, op time 15us,  malloc time 6us,  copy time 677us
total time 3400us, op time 150us,  malloc time 4us,  copy time 3239us
total time 3450us, op time 128us,  malloc time 4us,  copy time 3310us
total time 683us, op time 17us,  malloc time 5us,  copy time 656us
total time 1846us, op time 14us,  malloc time 5us,  copy time 1819us
total time 2137us, op time 34us,  malloc time 5us,  copy time 2091us
total time 720us, op time 20us,  malloc time 6us,  copy time 687us
total time 704us, op time 16us,  malloc time 5us,  copy time 675us
total time 678us, op time 15us,  malloc time 5us,  copy time 652us
total time 3706us, op time 188us,  malloc time 3us,  copy time 3506us
total time 3375us, op time 144us,  malloc time 6us,  copy time 3216us
total time 650us, op time 15us,  malloc time 4us,  copy time 625us
total time 1964us, op time 15us,  malloc time 5us,  copy time 1937us
total time 1983us, op time 27us,  malloc time 5us,  copy time 1945us
total time 683us, op time 18us,  malloc time 6us,  copy time 642us
total time 656us, op time 14us,  malloc time 5us,  copy time 630us
total time 687us, op time 14us,  malloc time 5us,  copy time 660us
total time 3827us, op time 159us,  malloc time 4us,  copy time 3654us
total time 3476us, op time 144us,  malloc time 4us,  copy time 3318us
total time 755us, op time 17us,  malloc time 4us,  copy time 729us
total time 1956us, op time 15us,  malloc time 6us,  copy time 1928us
total time 1802us, op time 32us,  malloc time 6us,  copy time 1757us
total time 665us, op time 16us,  malloc time 4us,  copy time 638us
total time 797us, op time 19us,  malloc time 5us,  copy time 766us
total time 684us, op time 15us,  malloc time 4us,  copy time 658us
total time 3938us, op time 197us,  malloc time 5us,  copy time 3728us
total time 3402us, op time 153us,  malloc time 5us,  copy time 3235us
total time 660us, op time 18us,  malloc time 5us,  copy time 632us
total time 1917us, op time 16us,  malloc time 6us,  copy time 1887us
total time 1839us, op time 33us,  malloc time 7us,  copy time 1793us
total time 730us, op time 17us,  malloc time 5us,  copy time 699us
total time 705us, op time 15us,  malloc time 5us,  copy time 678us
total time 665us, op time 14us,  malloc time 5us,  copy time 639us
total time 3721us, op time 152us,  malloc time 5us,  copy time 3557us
total time 3829us, op time 196us,  malloc time 6us,  copy time 3617us
  • Code for timing measurement
diff --git a/src/ggml-cuda.cu b/src/ggml-cuda.cu
index 9d42efb..3d2f0fd 100644
--- a/src/ggml-cuda.cu
+++ b/src/ggml-cuda.cu
@@ -5043,6 +5043,12 @@ inline void ggml_cuda_op_scale(

 static void ggml_cuda_op(const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst,
                          ggml_cuda_op_t op, bool src0_needs_f32, bool flatten_rows) {
+    int tt0 = ggml_time_us();
+    int t0;
+    int t1;
+    int op_time = 0;
+    int copy_time = 0;
+    int malloc_time = 0;
     const int64_t ne00 = src0->ne[0];
     const int64_t ne01 = src0->ne[1];
     const int64_t ne02 = src0->ne[2];
@@ -5157,6 +5163,7 @@ static void ggml_cuda_op(const ggml_tensor * src0, const ggml_tensor * src1, ggml_tensor * dst,
             CUDA_CHECK(cudaStreamWaitEvent(cudaStream_main, src0_extra->events[g_main_device]));
         }
 
+        t0 = ggml_time_us();
         if (src0_on_device && src0_is_contiguous) {
             if (src0_is_f32) {
                 src0_ddf[id] = (float *) src0_extra->data_device[id];
@@ ... @@
         }
+        t1 = ggml_time_us();
+
+        malloc_time += t1 - t0;
 
         for (int64_t i03 = 0; i03 < i03_max; i03++) {
             const int64_t i13 = i03 % ne13;
@@ ... @@
                 }
 
                 // copy src0, src1 to device if necessary
+                t0 = ggml_time_us();
                 if (use_src1 && !src1_stays_on_host) {
                     if (src1->backend == GGML_BACKEND_CPU) {
@@ ... @@
                     CUDA_CHECK(cudaGetLastError());
                 }
+                t1 = ggml_time_us();
+                copy_time += t1 - t0;
 
                 // do the computation
+                t0 = ggml_time_us();
                 op(src0, src1, dst, src0_ddq_i, src0_ddf_i, src1_ddf_i, dst_ddf_i, i02, i01_low, i01_high, i11, cudaStream_main);
                 CUDA_CHECK(cudaGetLastError());
+                t1 = ggml_time_us();
+                op_time += t1 - t0;
 
                 // copy dst to host or other device if necessary
+                t0 = ggml_time_us();
                 if (!dst_on_device) {
                     void * dst_off_device;
                     cudaMemcpyKind kind;
@@ ... @@
                     }
                 }
+                t1 = ggml_time_us();
+                copy_time += t1 - t0;
 
                 // signify to main device that other device is done
                 if (split && g_device_count > 1 && id != g_main_device) {
@@ ... @@
     }
 
     // wait until each device is finished, then free their buffers
+    t0 = ggml_time_us();
     for (int id = 0; id < g_device_count; ++id) {
@@ ... @@
             ggml_cuda_pool_free(dst_ddf[id], dst_asf[id]);
         }
     }
+    t1 = ggml_time_us();
+    malloc_time += t1 - t0;
 
     // main device waits for all other devices to be finished
     if (split && g_device_count > 1) {
@@ ... @@
         CUDA_CHECK(cudaSetDevice(g_main_device));
         CUDA_CHECK(cudaDeviceSynchronize());
     }
+    int tt1 = ggml_time_us();
+    printf("total time %dus, op time %dus,  malloc time %dus,  copy time %dus\n",
+           tt1 - tt0, op_time, malloc_time, copy_time);
 }
 
@@ ... @@ bool ggml_cuda_compute_forward(struct ggml_compute_params * params, struct ggml_tensor * tensor) {
     if (params->type == GGML_TASK_INIT || params->type == GGML_TASK_FINALIZE) {
         return true;
     }
+    // printf("run on cuda %s\n", ggml_op_name(tensor->op));
     func(tensor->src[0], tensor->src[1], tensor);
     return true;
 }
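
For context, here is a minimal sketch of the round trip that the numbers above reflect (hypothetical code, not ggml's actual implementation; the buffer names and the placeholder CPU op are made up): whenever an operator in the graph has no CUDA version, its data has to be copied device-to-host, the op runs on the CPU, and the result is copied back before the next CUDA op can run.

// Hypothetical illustration of the fallback round trip, not ggml code.
// d_x: device buffer, h_x: host buffer, n: element count.
#include <cuda_runtime.h>

static void fallback_round_trip(float * d_x, float * h_x, size_t n, cudaStream_t stream) {
    // 1. the op has no CUDA kernel: copy device -> host
    cudaMemcpyAsync(h_x, d_x, n * sizeof(float), cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);  // the CPU needs the data before it can compute

    // 2. run the op on the CPU (placeholder element-wise op)
    for (size_t i = 0; i < n; ++i) {
        h_x[i] *= 0.5f;
    }

    // 3. copy host -> device so the following CUDA ops can continue
    cudaMemcpyAsync(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice, stream);
}

Implementing the missing operators in CUDA removes steps 1 and 3, which is where most of the measured time goes.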

slaren (Collaborator) commented Aug 19, 2023

I am not sure if you are already doing this, but the CUDA backend currently requires a lot of manual changes to move the tensors to VRAM. The only example of how to do this currently, AFAIK, is in llama.cpp. Also keep in mind that these operations are in some cases asynchronous, so you cannot really measure their timings this way. You can use a tool such as Nsight Systems instead.
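
To illustrate the asynchrony point with a minimal sketch (not ggml code; the kernel and launch configuration are placeholders): a host-side clock around an asynchronous launch or cudaMemcpyAsync mostly measures the time to enqueue the work, whereas cudaEvent timestamps are recorded on the stream itself and cover the actual GPU execution.

#include <cstdio>
#include <cuda_runtime.h>

// placeholder workload standing in for a real op
__global__ void dummy_kernel(float * x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

static void time_kernel(float * d_x, int n, cudaStream_t stream) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // a CPU timer here would stop ticking as soon as the launch is enqueued,
    // long before the kernel (or an async copy) has actually finished
    cudaEventRecord(start, stream);
    dummy_kernel<<<(n + 255) / 256, 256, 0, stream>>>(d_x, n);
    cudaEventRecord(stop, stream);

    cudaEventSynchronize(stop);              // wait until the GPU reaches 'stop'
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed GPU time between the events
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
}

In other words, host-side deltas around asynchronous calls tend to charge the time to whichever later call happens to block, which is why a timeline profiler such as Nsight Systems gives a more reliable attribution.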


leejet (Contributor, Author) commented Aug 19, 2023

Yes, I have copied the necessary tensors to VRAM. It seems I overlooked that some CUDA operations are asynchronous. I will re-profile using Nsight.
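
In case it helps with the re-profiling, a small sketch (assuming the NVTX library that ships with the CUDA toolkit; the range names are arbitrary): wrapping the copy and compute phases in NVTX ranges makes them show up as named spans on the Nsight Systems timeline, next to the actual memcpy and kernel rows.

// NVTX annotation sketch for Nsight Systems; range names are arbitrary.
// Build with -lnvToolsExt and profile with: nsys profile -o report ./app
#include <nvToolsExt.h>

static void traced_section() {
    nvtxRangePushA("copy src to device");  // opens a named range on the timeline
    // ... cudaMemcpyAsync calls ...
    nvtxRangePop();

    nvtxRangePushA("op");
    // ... kernel launches ...
    nvtxRangePop();
}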
