GCC Middle and Back End API Reference
Functions

void renumber_gimple_stmt_uids (void)
void renumber_gimple_stmt_uids_in_blocks (basic_block *, int)
void dump_variable (FILE *, tree)
void debug_variable (tree)
void dump_dfa_stats (FILE *)
void debug_dfa_stats (void)
tree ssa_default_def (struct function *, tree)
void set_ssa_default_def (struct function *, tree, tree)
tree get_or_create_ssa_default_def (struct function *, tree)
tree get_ref_base_and_extent (tree, HOST_WIDE_INT *, HOST_WIDE_INT *, HOST_WIDE_INT *)
tree get_addr_base_and_unit_offset (tree, HOST_WIDE_INT *)
bool stmt_references_abnormal_ssa_name (gimple)
void dump_enumerated_decls (FILE *, int)
static tree get_addr_base_and_unit_offset_1 (tree exp, HOST_WIDE_INT *poffset, tree (*valueize) (tree))
void debug_dfa_stats (void)
Dump DFA statistics on stderr.
References memset().
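Like GCC's other debug_* entry points, this function is meant to be called by hand from a debugger rather than from pass code. A hypothetical gdb session, stopped inside cc1:

  (gdb) call debug_dfa_stats ()

The statistics are written to stderr.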
void debug_variable (tree)

void dump_dfa_stats (FILE *)

void dump_enumerated_decls (FILE *, int)

void dump_variable (FILE *, tree)

tree get_addr_base_and_unit_offset (tree, HOST_WIDE_INT *)
static tree get_addr_base_and_unit_offset_1 (tree exp, HOST_WIDE_INT *poffset, tree (*valueize) (tree)) [inline, static]
Returns the base object and a constant BITS_PER_UNIT offset in *POFFSET that denotes the starting address of the memory access EXP. Returns NULL_TREE if the offset is not constant or any component is not BITS_PER_UNIT-aligned. VALUEIZE if non-NULL is used to valueize SSA names. It should return its argument or a constant if the argument is known to be constant.
??? This is a static inline here to avoid the overhead of the indirect calls to VALUEIZE. But is this overhead really that significant? And should we perhaps just rely on WHOPR to specialize the function?
Compute cumulative byte-offset for nested component-refs and array-refs, and find the ultimate containing object.
If the resulting bit-offset is constant, track it.
Hand back the decl for MEM[&decl, off].
Likewise, hand back the decl for MEM[&decl, off] in the TARGET_MEM_REF case.
References array_ref_element_size(), array_ref_low_bound(), component_ref_field_offset(), double_int::high, HOST_WIDE_INT, integer_zerop(), mem_ref_offset(), and double_int::to_shwi().
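A minimal usage sketch, not taken from the GCC sources; the variable exp and the callback name identity_valueize are hypothetical. It shows a valueize hook that performs no substitution, which is equivalent to passing NULL for VALUEIZE:

  /* Trivial valueize callback: return the SSA name unchanged.
     (Hypothetical; passing NULL has the same effect.)  */
  static tree
  identity_valueize (tree name)
  {
    return name;
  }

  /* Compute the base object and constant byte offset of the
     address denoted by EXP.  */
  HOST_WIDE_INT byte_off;
  tree base = get_addr_base_and_unit_offset_1 (exp, &byte_off,
                                               identity_valueize);
  if (base != NULL_TREE)
    {
      /* The access starts BYTE_OFF bytes into BASE.  */
    }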
tree get_ref_base_and_extent (tree exp, HOST_WIDE_INT *poffset, HOST_WIDE_INT *psize, HOST_WIDE_INT *pmax_size)
If EXP is a handled component reference for a structure, return the base variable. The access range is delimited by bit positions *POFFSET and *POFFSET + *PMAX_SIZE. The access size is *PSIZE bits. If either *PSIZE or *PMAX_SIZE is -1, they could not be determined. If *PSIZE and *PMAX_SIZE are equal, the access is non-variable.
First get the final access size from just the outermost expression.
Initially, maxsize is the same as the accessed element size. In the following it will only grow (or become -1).
Compute cumulative bit-offset for nested component-refs and array-refs, and find the ultimate containing object.
If we had seen a variable array ref already and we just referenced the last field of a struct or a union member then we have to adjust maxsize by the padding at the end of our field.
We need to adjust maxsize to the whole structure bitsize. But we can subtract any constant offset seen so far, because that would get us out of the structure otherwise.
If the resulting bit-offset is constant, track it.
An array ref with a constant index up in the structure hierarchy will constrain the size of any variable array ref lower in the access hierarchy.
We need to adjust maxsize to the whole array bitsize. But we can subtract any constant offset seen so far, because that would get us outside of the array otherwise.
Remember that we have seen an array ref with a variable index.
Hand back the decl for MEM[&decl, off].
Likewise, hand back the decl for MEM[&decl, off] in the TARGET_MEM_REF case.
Via the variable index or index2 we can reach the whole object.
We need to deal with variable arrays ending structures such as
  struct { int length; int a[1]; } x;                        x.a[d]
  struct { struct { int a; int b; } a[1]; } x;               x.a[d].a
  struct { struct { int a[1]; } a[1]; } x;                   x.a[0][d], x.a[d][0]
  struct { int len; union { int a[1]; struct X x; } u; } x;  x.u.a[d]
where we do not know maxsize for variable index accesses to the array. The simplest way to conservatively deal with this is to punt in the case that offset + maxsize reaches the base type boundary. This needs to include possible trailing padding that is there for alignment purposes.
In case of a decl or constant base object we can do better.
If maxsize is unknown adjust it according to the size of the base decl.
If maxsize is unknown adjust it according to the size of the base type constant.
??? Due to negative offsets in ARRAY_REF we can end up with negative bit_offset here. We might want to store a zero offset in this case.
Referenced by fold_builtin_logarithm(), get_ssa_def_if_simple_copy(), parm_ref_data_preserved_p(), and vn_reference_lookup_3().
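A minimal usage sketch (hypothetical; ref is assumed to be a memory reference tree from the surrounding pass), showing how the out-parameters are commonly interpreted:

  HOST_WIDE_INT offset, size, max_size;
  tree base = get_ref_base_and_extent (ref, &offset, &size, &max_size);
  if (size != -1 && size == max_size)
    {
      /* Non-variable access: exactly SIZE bits starting at bit
         OFFSET within BASE.  */
    }
  else
    {
      /* The size or extent could not be determined (-1) or the
         access is variable; only conservative conclusions about
         the accessed range are valid.  */
    }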
void renumber_gimple_stmt_uids (void)
Renumber all of the gimple stmt uids.
References cfun, gimple_set_uid(), gsi_end_p(), gsi_next(), gsi_start_bb(), gsi_start_phis(), gsi_stmt(), and inc_gimple_stmt_max_uid().
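A hedged sketch of one common use (stmt_a and stmt_b are hypothetical): after renumbering, the uids of statements within a basic block increase in statement order, so relative position can be compared without walking the block:

  renumber_gimple_stmt_uids ();
  ...
  if (gimple_uid (stmt_a) < gimple_uid (stmt_b))
    {
      /* STMT_A precedes STMT_B in the renumbering.  */
    }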
void renumber_gimple_stmt_uids_in_blocks (basic_block *, int)
bool stmt_references_abnormal_ssa_name (gimple)
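A hedged sketch of a typical guard (the surrounding pass code is hypothetical): SSA names that occur in abnormal PHI nodes cannot be freely propagated or moved, so transformations often reject statements referencing them:

  /* Punt: the statement mentions an SSA name occurring in an
     abnormal PHI.  */
  if (stmt_references_abnormal_ssa_name (stmt))
    return false;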