https://sequencediagram.org/
You can learn the syntax from the instructions behind the button in the top-left corner of the page. Check it out.
Here is my example: a sequence diagram tracing some of the XLA AOT source code in TensorFlow.
title TFCompile
participant tfcompile_main.cc Main
participant compile.cc
participant tf2xla.cc
participant XlaCompiler
participant CompileOnlyClient
participant CompileOnlyService
tfcompile_main.cc Main->compile.cc:CompileGraph(graph_def, config, flags, &compile_result)
compile.cc->tf2xla.cc:ConvertGraphDefToXla(graph_def, config, client, &computation)
tf2xla.cc->tf2xla.cc:InitGraph(graph_def, config, &graph)
tf2xla.cc->tf2xla.cc:ConvertGraphToXla(std::move(graph), client, computation)
tf2xla.cc->XlaCompiler:compiler.CompileGraph(XlaCompiler::CompileOptions(),\n"tfcompile", std::move(graph), xla_args, &result)
XlaCompiler->XlaCompiler:BuildArguments()
XlaCompiler->XlaCompiler:BuildComputation()
compile.cc->compile.cc:CompileXla(client, computation, aot_opts, compile_result)
compile.cc->CompileOnlyClient:client->CompileAheadOfTime({instance}, aot_opts)
CompileOnlyClient->CompileOnlyService:compiler_service_->CompileAheadOfTime(service_instances, options, metadata)

Inside CompileOnlyClient::CompileAheadOfTime, each client-side AotXlaComputationInstance is converted into a service-side instance before the call is forwarded to CompileOnlyService:

  std::vector<CompileOnlyService::AotXlaComputationInstance> service_instances;
  service_instances.reserve(computations.size());
  for (const AotXlaComputationInstance& instance : computations) {
    service_instances.emplace_back();
    CompileOnlyService::AotXlaComputationInstance& service_instance =
        service_instances.back();
    TF_RET_CHECK(instance.computation != nullptr);
    service_instance.computation = instance.computation->proto();
    service_instance.argument_layouts = instance.argument_layouts;
    service_instance.result_layout = instance.result_layout;
  }
  return compiler_service_->CompileAheadOfTime(service_instances, options,
                                               metadata);
I think the most important parts of AOT are these two functions:
BuildArguments()

  TF_RETURN_IF_ERROR(BuildArguments(
      *graph, real_args, options.use_tuple_arg, &builder, context, arg_cores,
      &arg_expressions, &result->input_mapping, &result->xla_input_shapes,
      options.is_entry_computation));

BuildComputation()

  TF_RETURN_IF_ERROR(BuildComputation(
      real_args, retvals, arg_cores, retval_cores, context->resources(),
      std::move(token_output),
      options.is_entry_computation ? options_.shape_representation_fn
                                   : ShapeRepresentationFn{},
      options.return_updated_values_for_all_resources,
      options.always_return_tuple, &builder, result->computation.get(),
      &num_computation_outputs, &num_nonconst_outputs, &result->outputs,
      &result->resource_updates, &result->xla_output_shape));
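To make the division of labor between these two calls concrete, here is a minimal conceptual sketch. ToyCompiler and everything in it are hypothetical stand-ins, not the real XlaCompiler API: the idea is only that CompileGraph first turns graph inputs into parameters (BuildArguments), then collects the return values and seals the module (BuildComputation).

```cpp
#include <cassert>
#include <string>
#include <vector>

// Conceptual sketch only -- ToyCompiler is a hypothetical stand-in,
// not the real XlaCompiler.
struct ToyCompiler {
  std::vector<std::string> parameters;  // one entry per graph input
  std::vector<std::string> outputs;     // one entry per fetched retval

  // Analogue of BuildArguments(): map each graph input to a parameter.
  void BuildArguments(const std::vector<std::string>& graph_inputs) {
    for (const std::string& input : graph_inputs) {
      parameters.push_back("param(" + input + ")");
    }
  }

  // Analogue of BuildComputation(): record the retvals and finalize
  // the whole thing into a single module.
  std::string BuildComputation(const std::vector<std::string>& retvals) {
    outputs = retvals;
    return "HloModule{params=" + std::to_string(parameters.size()) +
           ", outputs=" + std::to_string(outputs.size()) + "}";
  }
};
```

In the real code both steps operate on the same XlaBuilder, which is why the diagram shows them as consecutive self-calls inside XlaCompiler.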
If you want to see what the HLO module or the dumped data from XLA AOT looks like, please check out another post of mine:
[XLA 研究] How to use XLA AOT compilation in TensorFlow ( Part II )
P.S.:
After studying the XLA AOT source code a bit, I would summarize the key steps of the AOT process as:
GraphDef ==> TF Graph ==> Arguments ==> XlaBuilder ==> HloModule ==> Generated Binary Code
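The chain above can be sketched as a pipeline of stage functions. This is purely illustrative; the function names below are toy stand-ins for the stages, not TensorFlow APIs:

```cpp
#include <cassert>
#include <string>

// Each function is a toy stand-in for one "==>" step of the summary.
std::string GraphDefToGraph(const std::string& graph_def) {
  return "Graph[" + graph_def + "]";  // InitGraph: proto -> in-memory graph
}
std::string GraphToHlo(const std::string& graph) {
  return "HloModule[" + graph + "]";  // built via XlaBuilder in the real flow
}
std::string HloToBinary(const std::string& hlo) {
  return "object_file[" + hlo + "]";  // ahead-of-time code generation
}
```

Composing the stages mirrors how tfcompile hands the result of each phase to the next.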