[llvm] Use llvm::append_range (NFC) #136066
Conversation
This patch replaces llvm::copy(Src, std::back_inserter(Dst)); with llvm::append_range(Dst, Src); for brevity. One side benefit is that llvm::append_range eventually calls llvm::SmallVector::reserve if Dst is an llvm::SmallVector.
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-llvm-globalisel

Author: Kazu Hirata (kazutakahirata)

Changes: This patch replaces llvm::copy(Src, std::back_inserter(Dst)); with llvm::append_range(Dst, Src); for brevity. One side benefit is that llvm::append_range eventually calls llvm::SmallVector::reserve if Dst is an llvm::SmallVector.

Full diff: https://github.com/llvm/llvm-project/pull/136066.diff (11 files affected)
diff --git a/llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h b/llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
index 66ea5ecbc4bfb..57c408968b19a 100644
--- a/llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
+++ b/llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
@@ -1159,8 +1159,7 @@ void BlockFrequencyInfoImpl<BT>::setBlockFreq(const BlockT *BB,
template <class BT> void BlockFrequencyInfoImpl<BT>::initializeRPOT() {
const BlockT *Entry = &F->front();
- RPOT.reserve(F->size());
- std::copy(po_begin(Entry), po_end(Entry), std::back_inserter(RPOT));
+ llvm::append_range(RPOT, post_order(Entry));
std::reverse(RPOT.begin(), RPOT.end());
assert(RPOT.size() - 1 <= BlockNode::getMaxIndex() &&
diff --git a/llvm/include/llvm/Bitcode/BitcodeConvenience.h b/llvm/include/llvm/Bitcode/BitcodeConvenience.h
index 009a086a339cb..b7f63664409c2 100644
--- a/llvm/include/llvm/Bitcode/BitcodeConvenience.h
+++ b/llvm/include/llvm/Bitcode/BitcodeConvenience.h
@@ -266,7 +266,7 @@ template <typename ElementTy> class BCRecordCoding<BCArray<ElementTy>> {
ElementTy::assertValid(element);
#endif
buffer.reserve(buffer.size() + std::distance(array.begin(), array.end()));
- std::copy(array.begin(), array.end(), std::back_inserter(buffer));
+ llvm::append_range(buffer, array);
Stream.EmitRecordWithAbbrev(code, buffer);
}
diff --git a/llvm/include/llvm/IR/DiagnosticInfo.h b/llvm/include/llvm/IR/DiagnosticInfo.h
index 779c88993b71c..8743f8058c382 100644
--- a/llvm/include/llvm/IR/DiagnosticInfo.h
+++ b/llvm/include/llvm/IR/DiagnosticInfo.h
@@ -693,7 +693,7 @@ class DiagnosticInfoIROptimization : public DiagnosticInfoOptimizationBase {
Orig.RemarkName, Orig.getFunction(), Orig.getLocation()),
CodeRegion(Orig.getCodeRegion()) {
*this << Prepend;
- std::copy(Orig.Args.begin(), Orig.Args.end(), std::back_inserter(Args));
+ llvm::append_range(Args, Orig.Args);
}
/// Legacy interface.
diff --git a/llvm/lib/Analysis/LoopAccessAnalysis.cpp b/llvm/lib/Analysis/LoopAccessAnalysis.cpp
index 0cb1332eb337b..9a7d361b5b512 100644
--- a/llvm/lib/Analysis/LoopAccessAnalysis.cpp
+++ b/llvm/lib/Analysis/LoopAccessAnalysis.cpp
@@ -563,7 +563,7 @@ void RuntimePointerChecking::groupChecks(
// We've computed the grouped checks for this partition.
// Save the results and continue with the next one.
- llvm::copy(Groups, std::back_inserter(CheckingGroups));
+ llvm::append_range(CheckingGroups, Groups);
}
}
diff --git a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
index 85a6d67609798..931e4fe19e69a 100644
--- a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -232,7 +232,7 @@ ArrayRef<Register> IRTranslator::getOrCreateVRegs(const Value &Val) {
unsigned Idx = 0;
while (auto Elt = C.getAggregateElement(Idx++)) {
auto EltRegs = getOrCreateVRegs(*Elt);
- llvm::copy(EltRegs, std::back_inserter(*VRegs));
+ llvm::append_range(*VRegs, EltRegs);
}
} else {
assert(SplitTys.size() == 1 && "unexpectedly split LLT");
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
index ee271234d3119..0aa853389bf1a 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
@@ -592,7 +592,7 @@ llvm::createLibcall(MachineIRBuilder &MIRBuilder, const char *Name,
isLibCallInTailPosition(Result, *MI, MIRBuilder.getTII(),
*MIRBuilder.getMRI());
- std::copy(Args.begin(), Args.end(), std::back_inserter(Info.OrigArgs));
+ llvm::append_range(Info.OrigArgs, Args);
if (!CLI.lowerCall(MIRBuilder, Info))
return LegalizerHelper::UnableToLegalize;
@@ -708,7 +708,7 @@ llvm::createMemLibcall(MachineIRBuilder &MIRBuilder, MachineRegisterInfo &MRI,
MI.getOperand(MI.getNumOperands() - 1).getImm() &&
isLibCallInTailPosition(Info.OrigRet, MI, MIRBuilder.getTII(), MRI);
- std::copy(Args.begin(), Args.end(), std::back_inserter(Info.OrigArgs));
+ llvm::append_range(Info.OrigArgs, Args);
if (!CLI.lowerCall(MIRBuilder, Info))
return LegalizerHelper::UnableToLegalize;
@@ -855,7 +855,7 @@ createAtomicLibcall(MachineIRBuilder &MIRBuilder, MachineInstr &MI) {
Info.Callee = MachineOperand::CreateES(Name);
Info.OrigRet = CallLowering::ArgInfo(RetRegs, RetTy, 0);
- std::copy(Args.begin(), Args.end(), std::back_inserter(Info.OrigArgs));
+ llvm::append_range(Info.OrigArgs, Args);
if (!CLI.lowerCall(MIRBuilder, Info))
return LegalizerHelper::UnableToLegalize;
diff --git a/llvm/lib/CodeGen/ModuloSchedule.cpp b/llvm/lib/CodeGen/ModuloSchedule.cpp
index 9fd7443dc0d00..f028f01c58290 100644
--- a/llvm/lib/CodeGen/ModuloSchedule.cpp
+++ b/llvm/lib/CodeGen/ModuloSchedule.cpp
@@ -1855,9 +1855,9 @@ void PeelingModuloScheduleExpander::peelPrologAndEpilogs() {
// Create a list of all blocks in order.
SmallVector<MachineBasicBlock *, 8> Blocks;
- llvm::copy(PeeledFront, std::back_inserter(Blocks));
+ llvm::append_range(Blocks, PeeledFront);
Blocks.push_back(BB);
- llvm::copy(PeeledBack, std::back_inserter(Blocks));
+ llvm::append_range(Blocks, PeeledBack);
// Iterate in reverse order over all instructions, remapping as we go.
for (MachineBasicBlock *B : reverse(Blocks)) {
diff --git a/llvm/lib/ExecutionEngine/JITLink/JITLink.cpp b/llvm/lib/ExecutionEngine/JITLink/JITLink.cpp
index 15a8fcf312ade..e1209e1e95496 100644
--- a/llvm/lib/ExecutionEngine/JITLink/JITLink.cpp
+++ b/llvm/lib/ExecutionEngine/JITLink/JITLink.cpp
@@ -313,7 +313,7 @@ void LinkGraph::dump(raw_ostream &OS) {
OS << "section " << Sec->getName() << ":\n\n";
std::vector<Block *> SortedBlocks;
- llvm::copy(Sec->blocks(), std::back_inserter(SortedBlocks));
+ llvm::append_range(SortedBlocks, Sec->blocks());
llvm::sort(SortedBlocks, [](const Block *LHS, const Block *RHS) {
return LHS->getAddress() < RHS->getAddress();
});
@@ -339,7 +339,7 @@ void LinkGraph::dump(raw_ostream &OS) {
if (!B->edges_empty()) {
OS << " edges:\n";
std::vector<Edge> SortedEdges;
- llvm::copy(B->edges(), std::back_inserter(SortedEdges));
+ llvm::append_range(SortedEdges, B->edges());
llvm::sort(SortedEdges, [](const Edge &LHS, const Edge &RHS) {
return LHS.getOffset() < RHS.getOffset();
});
diff --git a/llvm/lib/Transforms/Scalar/GVNSink.cpp b/llvm/lib/Transforms/Scalar/GVNSink.cpp
index d0f844d7cd36a..6f88408abfdbc 100644
--- a/llvm/lib/Transforms/Scalar/GVNSink.cpp
+++ b/llvm/lib/Transforms/Scalar/GVNSink.cpp
@@ -199,8 +199,8 @@ class ModelledPHI {
SmallSetVector<BasicBlock *, 4> &B,
const DenseMap<const BasicBlock *, unsigned> &BlockOrder) {
// The order of Values and Blocks are already ordered by the caller.
- llvm::copy(V, std::back_inserter(Values));
- llvm::copy(B, std::back_inserter(Blocks));
+ llvm::append_range(Values, V);
+ llvm::append_range(Blocks, B);
verifyModelledPHI(BlockOrder);
}
@@ -208,7 +208,7 @@ class ModelledPHI {
/// TODO: Figure out a way to verifyModelledPHI in this constructor.
ModelledPHI(ArrayRef<Instruction *> Insts, unsigned OpNum,
SmallSetVector<BasicBlock *, 4> &B) {
- llvm::copy(B, std::back_inserter(Blocks));
+ llvm::append_range(Blocks, B);
for (auto *I : Insts)
Values.push_back(I->getOperand(OpNum));
}
diff --git a/llvm/tools/llvm-jitlink/llvm-jitlink.cpp b/llvm/tools/llvm-jitlink/llvm-jitlink.cpp
index 8a17abd58e98a..54b90cd7c7506 100644
--- a/llvm/tools/llvm-jitlink/llvm-jitlink.cpp
+++ b/llvm/tools/llvm-jitlink/llvm-jitlink.cpp
@@ -2360,7 +2360,7 @@ static Error addLibraries(Session &S,
SmallVector<char, 256> LibPath;
LibPath.reserve(SearchPath.size() + strlen("lib") + LL.LibName.size() +
LibExt.size() + 2); // +2 for pathsep, null term.
- llvm::copy(SearchPath, std::back_inserter(LibPath));
+ llvm::append_range(LibPath, SearchPath);
if (LibExt != ".lib" && LibExt != ".dll")
sys::path::append(LibPath, "lib" + LL.LibName + LibExt);
else
diff --git a/llvm/utils/TableGen/SearchableTableEmitter.cpp b/llvm/utils/TableGen/SearchableTableEmitter.cpp
index e91baf98e9ffc..7251a1ba545d5 100644
--- a/llvm/utils/TableGen/SearchableTableEmitter.cpp
+++ b/llvm/utils/TableGen/SearchableTableEmitter.cpp
@@ -701,8 +701,7 @@ void SearchableTableEmitter::collectTableEntries(
}
SearchIndex Idx;
- std::copy(Table.Fields.begin(), Table.Fields.end(),
- std::back_inserter(Idx.Fields));
+ llvm::append_range(Idx.Fields, Table.Fields);
llvm::sort(Table.Entries, [&](const Record *LHS, const Record *RHS) {
return compareBy(LHS, RHS, Idx);
});
@@ -1159,8 +1159,7 @@ void BlockFrequencyInfoImpl<BT>::setBlockFreq(const BlockT *BB,

 template <class BT> void BlockFrequencyInfoImpl<BT>::initializeRPOT() {
   const BlockT *Entry = &F->front();
-  RPOT.reserve(F->size());
-  std::copy(po_begin(Entry), po_end(Entry), std::back_inserter(RPOT));
+  llvm::append_range(RPOT, post_order(Entry));
The reserve here got lost. No idea if that's the cause or not.
@nikic Ack. Let me restore reserve. It's possible that llvm::append_range may not be (eventually) calling std::vector::reserve.
We could detect if the container type has .reserve and call it when available in append_range.
Or actually, I wonder if we use it with any container types that do not have reserve...
For example, append_values does call reserve:

llvm-project/llvm/include/llvm/ADT/STLExtras.h, lines 2126 to 2130 in b30100b:

void append_values(Container &C, Args &&...Values) {
  C.reserve(range_size(C) + sizeof...(Args));
  // Append all values one by one.
  ((void)C.insert(C.end(), std::forward<Args>(Values)), ...);
}
That wouldn't help here anyway, because the post order iterator does not have a known length.
Ah, good point -- the inserted range may not support fast size calculation
I've posted #136215 to restore the call to std::vector::reserve.
Unfortunately, restoring the reserve() actually made things worse, not better: https://llvm-compile-time-tracker.com/compare.php?from=209d8c8fa4fe16ef41003da17387f7c271002668&to=69b9ddc76418c6f60ce7751efb5beb1f3b3be3ff&stat=instructions:u

I just looked at what SmallVector::insert() does for iterator ranges, and found this gem: it computes std::distance(first, last) up front in order to call reserve(), which traverses the input range once, and then traverses it again to do the actual insertion. So we're computing the post-order traversal twice.

This seems like a pretty big footgun for the use of insert() and append_range(). I think the current insert() implementation should only be used for random access iterators, and there should be a different one for input iterators that only iterates once. (As a side note, does C++ really require the full random access kitchen sink when …)
Strictly speaking, SmallVector::insert() should probably require a forward iterator, not an input iterator, for its current implementation. (Though PostOrderIterator is also a forward iterator; maybe we could make it not one.)
Gah, this is bad, let me see if this is fixable based on iterator categories -- we don't have to reserve for all of them. Until then, should we revert this?
This is a partial revert of #136066, which introduced a compile-time regression. SmallVector::insert() computes the PostOrderIterator twice in order to call reserve().
I reverted just the problematic line in f696508, after confirming that it was indeed the cause of the regression.
@kuhar @nikic FWIW, in https://en.cppreference.com/w/cpp/container/vector/append_range |
Only calculate the size and reserve upfront when the input iterators support cheap (O(1)) distance. This is a follow up to: llvm#136066 (comment).
This is a partial revert of llvm#136066, which introduced a compile-time regression. SmallVector::insert() computes the PostOrderIterator twice in order to call reserve().
This patch replaces:
llvm::copy(Src, std::back_inserter(Dst));
with:
llvm::append_range(Dst, Src);
for brevity.
One side benefit is that llvm::append_range eventually calls
llvm::SmallVector::reserve if Dst is an llvm::SmallVector.