Gitflow: Sync develop into master #11674


Merged · 29 commits · Apr 18, 2024

Commits
abd79ea
test(browser-integration-tests): Test that errors during pageload/nav…
Lms24 Apr 15, 2024
1a715fc
Merge pull request #11616 from getsentry/master
github-actions[bot] Apr 15, 2024
8cb70e3
test(browser-integration-tests): Test trace lifetime for outgoing `fe…
Lms24 Apr 16, 2024
39c8290
test(browser-integration-tests): Add trace lifetime tests for `<meta>…
Lms24 Apr 16, 2024
ee4091e
fix(browser): Don't assume window.document is available (#11602)
Apr 16, 2024
452bcae
fix(node): Allow use of `NodeClient` without calling `init` (#11585)
timfish Apr 16, 2024
1a22856
feat(node): Collect Local Variables via a worker (#11586)
timfish Apr 16, 2024
44bc6cf
fix(nextjs): Escape Next.js' OpenTelemetry instrumentation (#11625)
Apr 16, 2024
d5ac938
ref(core): Rename `Hub` to `AsyncContextStack` & remove unneeded meth…
mydea Apr 16, 2024
379a9e5
fix(node): Ensure DSC is correctly set in envelope headers (#11628)
mydea Apr 16, 2024
f1c4611
test(browser-integration-test): Add trace lifetime tests for XHR requ…
Lms24 Apr 16, 2024
0d2c960
feat(nextjs): Skip OTEL root spans emitted by Next.js (#11623)
mydea Apr 16, 2024
919f60b
doc: Add clarifying comment for hub on ACS (#11633)
mydea Apr 16, 2024
8b6f838
feat(opentelemetry): Update OTEL packages & relax some version ranges…
mydea Apr 16, 2024
38758b9
fix(feedback): Fix timeout on feedback submission (#11619)
c298lee Apr 16, 2024
08e26ab
feat(deps): bump @opentelemetry/instrumentation-koa from 0.37.0 to 0.…
dependabot[bot] Apr 16, 2024
6f5dd50
feat(deps): bump @opentelemetry/instrumentation-pg from 0.38.0 to 0.4…
dependabot[bot] Apr 16, 2024
2c840c3
feat(deps): bump @opentelemetry/instrumentation-hapi from 0.34.0 to 0…
dependabot[bot] Apr 16, 2024
daf2edf
feat(core): Add `server.address` to browser `http.client` spans (#11634)
mydea Apr 17, 2024
cfcd226
feat(browser): Update `propagationContext` on `spanEnd` to keep trace…
Lms24 Apr 17, 2024
e474a57
test(browser-integration-tests): Add trace lifetime tests in TwP scen…
Lms24 Apr 17, 2024
c71922b
feat(node): Add `connect` instrumentation (#11651)
onurtemizkan Apr 17, 2024
d876255
ci(auto-release): Accept Pre-Releases in auto-release (#11655)
s1gr1d Apr 17, 2024
c014a79
Merge pull request #11658 from getsentry/master
github-actions[bot] Apr 17, 2024
bd526ab
fix: Missing ErrorEvent export are added to node, browser, bun, deno,…
AndreyTheWeb Apr 17, 2024
308e743
test(ci): Adjust `detectFlakyTests` to account for multiple tests in …
Lms24 Apr 17, 2024
f543033
ci: Streamline naming of CI workflows (#11656)
mydea Apr 18, 2024
30aa211
docs(publish-release): Improve docs on release publishing (#11672)
s1gr1d Apr 18, 2024
3e614ba
ci: Run CI on merge_group checks requested (#11675)
mydea Apr 18, 2024
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/flaky.yml
@@ -18,7 +18,7 @@ body:
     id: job-name
     attributes:
       label: Name of Job
-      placeholder: Build & Test / Nextjs (Node 14) Tests
+      placeholder: "CI: Build & Test / Nextjs (Node 14) Tests"
       description: name of job as reported in the status report
     validations:
       required: true
4 changes: 2 additions & 2 deletions .github/workflows/auto-release.yml
@@ -1,4 +1,4 @@
-name: Gitflow - Auto prepare release
+name: "Gitflow: Auto prepare release"
 on:
   pull_request:
     types:
@@ -25,7 +25,7 @@ jobs:
           # Parse version from head branch
           text: ${{ github.head_ref }}
           # match: preprare-release/xx.xx.xx
-          regex: '^prepare-release\/(\d+\.\d+\.\d+)$'
+          regex: '^prepare-release\/(\d+\.\d+\.\d+)(?:-(alpha|beta)\.\d+)?$'

       - name: Prepare release
         uses: getsentry/action-prepare-release@v1
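The loosened regex now accepts an optional `-alpha.N` / `-beta.N` suffix on the version. A quick standalone check of what it matches (this snippet and its sample branch names are illustrative, not part of the PR):

```typescript
// The updated branch-matching pattern from auto-release.yml, checked against a few branch names.
const releaseBranchRegex = /^prepare-release\/(\d+\.\d+\.\d+)(?:-(alpha|beta)\.\d+)?$/;

console.log(releaseBranchRegex.test('prepare-release/8.0.0'));        // true: stable releases still match
console.log(releaseBranchRegex.test('prepare-release/8.0.0-beta.1')); // true: pre-releases are now accepted
console.log(releaseBranchRegex.test('prepare-release/8.0.0-rc.1'));   // false: other suffixes stay rejected
```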
5 changes: 4 additions & 1 deletion .github/workflows/build.yml
@@ -1,11 +1,13 @@
-name: 'Build & Test'
+name: 'CI: Build & Test'
 on:
   push:
     branches:
       - develop
       - master
       - release/**
   pull_request:
+  merge_group:
+    types: [checks_requested]
   workflow_dispatch:
     inputs:
       commit:
@@ -1051,6 +1053,7 @@ jobs:
           'node-nestjs-app',
           'node-exports-test-app',
           'node-koa-app',
+          'node-connect-app',
           'vue-3',
           'webpack-4',
           'webpack-5'
2 changes: 1 addition & 1 deletion .github/workflows/canary.yml
@@ -1,4 +1,4 @@
-name: 'Canary Tests'
+name: 'CI: Canary Tests'
 on:
   schedule:
     # Run every day at midnight
2 changes: 1 addition & 1 deletion .github/workflows/clear-cache.yml
@@ -1,4 +1,4 @@
-name: Clear all GHA caches
+name: "Action: Clear all GHA caches"
 on:
   workflow_dispatch:

2 changes: 1 addition & 1 deletion .github/workflows/codeql-analysis.yml
@@ -9,7 +9,7 @@
 # the `language` matrix defined below to confirm you have the correct set of
 # supported CodeQL languages.
 #
-name: 'CodeQL'
+name: 'CI: CodeQL'

 on:
   push:
2 changes: 1 addition & 1 deletion .github/workflows/enforce-license-compliance.yml
@@ -1,4 +1,4 @@
-name: Enforce License Compliance
+name: "CI: Enforce License Compliance"

 on:
   push:
2 changes: 1 addition & 1 deletion .github/workflows/flaky-test-detector.yml
@@ -1,4 +1,4 @@
-name: 'Detect flaky tests'
+name: 'CI: Detect flaky tests'
 on:
   workflow_dispatch:
   pull_request:
2 changes: 1 addition & 1 deletion .github/workflows/gitflow-sync-develop.yml
@@ -1,4 +1,4 @@
-name: Gitflow - Sync master into develop
+name: "Gitflow: Sync master into develop"
 on:
   push:
     branches:
2 changes: 1 addition & 1 deletion .github/workflows/issue-package-label.yml
@@ -1,4 +1,4 @@
-name: 'Tag issue with package label'
+name: 'Automation: Tag issue with package label'

 on:
   issues:
4 changes: 2 additions & 2 deletions .github/workflows/release-size-info.yml
@@ -1,4 +1,4 @@
-name: Add size info to release
+name: "Automation: Add size info to release"
 on:
   release:
     types:
@@ -27,4 +27,4 @@ jobs:
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           version: ${{ steps.get_version.outputs.version }}
-          workflow_name: 'Build & Test'
+          workflow_name: 'CI: Build & Test'
2 changes: 1 addition & 1 deletion .github/workflows/release.yml
@@ -1,4 +1,4 @@
-name: Prepare Release
+name: "Action: Prepare Release"
 on:
   workflow_dispatch:
     inputs:
4 changes: 2 additions & 2 deletions README.md
@@ -8,8 +8,8 @@ _Bad software is everywhere, and we're tired of it. Sentry is on a mission to he
 faster, so we can get back to enjoying technology. If you want to join us
 [<kbd>**Check out our open positions**</kbd>](https://sentry.io/careers/)_

-![Build & Test](https://github.com/getsentry/sentry-javascript/workflows/Build%20&%20Test/badge.svg)
-[![codecov](https://codecov.io/gh/getsentry/sentry-javascript/branch/master/graph/badge.svg)](https://codecov.io/gh/getsentry/sentry-javascript)
+![Build & Test](https://github.com/getsentry/sentry-javascript/workflows/CI:%20Build%20&%20Test/badge.svg)
+[![codecov](https://codecov.io/gh/getsentry/sentry-javascript/branch/develop/graph/badge.svg)](https://codecov.io/gh/getsentry/sentry-javascript)
 [![npm version](https://img.shields.io/npm/v/@sentry/core.svg)](https://www.npmjs.com/package/@sentry/core)
 [![Discord](https://img.shields.io/discord/621778831602221064)](https://discord.gg/Ww9hbqr)
89 changes: 71 additions & 18 deletions dev-packages/browser-integration-tests/scripts/detectFlakyTests.ts
@@ -1,7 +1,30 @@
 import * as childProcess from 'child_process';
+import * as fs from 'fs';
 import * as path from 'path';
 import * as glob from 'glob';

+/**
+ * The number of browsers we run the tests in.
+ */
+const NUM_BROWSERS = 3;
+
+/**
+ * Assume that each test runs for 3s.
+ */
+const ASSUMED_TEST_DURATION_SECONDS = 3;
+
+/**
+ * We keep the runtime of the detector if possible under 30min.
+ */
+const MAX_TARGET_TEST_RUNTIME_SECONDS = 30 * 60;
+
+/**
+ * Running one test 50x is what we consider enough to detect flakiness.
+ * Running one test 5x is the bare minimum
+ */
+const MAX_PER_TEST_RUN_COUNT = 50;
+const MIN_PER_TEST_RUN_COUNT = 5;
+
 async function run(): Promise<void> {
   let testPaths: string[] = [];

@@ -20,23 +43,8 @@ ${changedPaths.join('\n')}
     }
   }

-  let runCount: number;
-  if (process.env.TEST_RUN_COUNT === 'AUTO') {
-    // No test paths detected: run everything 5x
-    runCount = 5;
-
-    if (testPaths.length > 0) {
-      // Run everything up to 100x, assuming that total runtime is less than 60min.
-      // We assume an average runtime of 3s per test, times 4 (for different browsers) = 12s per detected testPaths
-      // We want to keep overall runtime under 30min
-      const testCount = testPaths.length * 4;
-      const expectedRuntimePerTestPath = testCount * 3;
-      const expectedRuntime = Math.floor((30 * 60) / expectedRuntimePerTestPath);
-      runCount = Math.min(50, Math.max(expectedRuntime, 5));
-    }
-  } else {
-    runCount = parseInt(process.env.TEST_RUN_COUNT || '10');
-  }
+  const repeatEachCount = getPerTestRunCount(testPaths);
+  console.log(`Running tests ${repeatEachCount} times each.`);

   const cwd = path.join(__dirname, '../');

@@ -45,7 +53,7 @@ ${changedPaths.join('\n')}
     const cp = childProcess.spawn(
       `npx playwright test ${
         testPaths.length ? testPaths.join(' ') : './suites'
-      } --reporter='line' --repeat-each ${runCount}`,
+      } --reporter='line' --repeat-each ${repeatEachCount}`,
       { shell: true, cwd },
     );

@@ -88,6 +96,33 @@ ${changedPaths.join('\n')}
   console.log(`☑️ All tests passed.`);
 }

+/**
+ * Returns how many time one test should run based on the chosen mode and a bunch of heuristics
+ */
+function getPerTestRunCount(testPaths: string[]) {
+  if (process.env.TEST_RUN_COUNT === 'AUTO' && testPaths.length > 0) {
+    // Run everything up to 100x, assuming that total runtime is less than 60min.
+    // We assume an average runtime of 3s per test, times 4 (for different browsers) = 12s per detected testPaths
+    // We want to keep overall runtime under 30min
+    const estimatedNumberOfTests = testPaths.map(getApproximateNumberOfTests).reduce((a, b) => a + b);
+    console.log(`Estimated number of tests: ${estimatedNumberOfTests}`);
+
+    // tests are usually run against all browsers we test with, so let's assume this
+    const testRunCount = estimatedNumberOfTests * NUM_BROWSERS;
+    console.log(`Estimated test runs for one round: ${testRunCount}`);
+
+    const estimatedTestRuntime = testRunCount * ASSUMED_TEST_DURATION_SECONDS;
+    console.log(`Estimated test runtime: ${estimatedTestRuntime}s`);
+
+    const expectedPerTestRunCount = Math.floor(MAX_TARGET_TEST_RUNTIME_SECONDS / estimatedTestRuntime);
+    console.log(`Expected per-test run count: ${expectedPerTestRunCount}`);
+
+    return Math.min(MAX_PER_TEST_RUN_COUNT, Math.max(expectedPerTestRunCount, MIN_PER_TEST_RUN_COUNT));
+  }
+
+  return parseInt(process.env.TEST_RUN_COUNT || '5');
+}
+
 function getTestPaths(): string[] {
   const paths = glob.sync('suites/**/test.{ts,js}', {
     cwd: path.join(__dirname, '../'),
@@ -111,4 +146,22 @@ function logError(error: unknown) {
   }
 }

+/**
+ * Definitely not bulletproof way of getting the number of tests in a file :D
+ * We simply match on `it(`, `test(`, etc and count the matches.
+ *
+ * Note: This test completely disregards parameterized tests (`it.each`, etc) or
+ * skipped/disabled tests and other edge cases. It's just a rough estimate.
+ */
+function getApproximateNumberOfTests(testPath: string): number {
+  try {
+    const content = fs.readFileSync(path.join(process.cwd(), testPath, 'test.ts'), 'utf-8');
+    const matches = content.match(/it\(|test\(|sentryTest\(/g);
+    return Math.max(matches ? matches.length : 1, 1);
+  } catch (e) {
+    console.error(`Could not read file ${testPath}`);
+    return 1;
+  }
+}
+
 run();
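The run-count heuristic in `getPerTestRunCount` above boils down to clamping `floor(1800s / (estimatedTests * 3 browsers * 3s))` between 5 and 50 repeats. A standalone sketch of that arithmetic (the helper name and sample inputs are ours; the constants are copied from the diff):

```typescript
// Standalone restatement of the PR's run-count heuristic (helper name is ours).
const NUM_BROWSERS = 3;                          // browsers each test runs in
const ASSUMED_TEST_DURATION_SECONDS = 3;         // assumed average runtime per test
const MAX_TARGET_TEST_RUNTIME_SECONDS = 30 * 60; // keep the whole detector under 30min
const MAX_PER_TEST_RUN_COUNT = 50;               // enough repeats to call a test flaky
const MIN_PER_TEST_RUN_COUNT = 5;                // bare minimum of repeats

function perTestRunCount(estimatedNumberOfTests: number): number {
  const testRunCount = estimatedNumberOfTests * NUM_BROWSERS;
  const estimatedTestRuntimeSeconds = testRunCount * ASSUMED_TEST_DURATION_SECONDS;
  const expected = Math.floor(MAX_TARGET_TEST_RUNTIME_SECONDS / estimatedTestRuntimeSeconds);
  return Math.min(MAX_PER_TEST_RUN_COUNT, Math.max(expected, MIN_PER_TEST_RUN_COUNT));
}

console.log(perTestRunCount(4));   // 50: small change sets get repeated the maximum number of times
console.log(perTestRunCount(20));  // 10
console.log(perTestRunCount(200)); // 5: large change sets fall back to the minimum
```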
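`getApproximateNumberOfTests` just counts occurrences of `it(` / `test(` / `sentryTest(` in a file, so as its comment says it is only a rough estimate. A minimal illustration with made-up file content:

```typescript
// Rough test counting via regex, as in getApproximateNumberOfTests (sample content is invented).
const sampleTestFile = `
sentryTest('captures an error', async () => {});
sentryTest('captures a transaction', async () => {});
it('does something else', () => {});
`;

const matches = sampleTestFile.match(/it\(|test\(|sentryTest\(/g);
// Fall back to 1 so an unreadable or empty file still counts as one test.
const approximateCount = Math.max(matches ? matches.length : 1, 1);
console.log(approximateCount); // 3
```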
@@ -0,0 +1,2 @@
@sentry:registry=http://127.0.0.1:4873
@sentry-internal:registry=http://127.0.0.1:4873
@@ -0,0 +1,31 @@
{
  "name": "node-connect-app",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "start": "ts-node src/app.ts",
    "test": "playwright test",
    "clean": "npx rimraf node_modules pnpm-lock.yaml",
    "typecheck": "tsc",
    "test:build": "pnpm install && pnpm run typecheck",
    "test:assert": "pnpm test"
  },
  "dependencies": {
    "@sentry/node": "latest || *",
    "@sentry/types": "latest || *",
    "@sentry/core": "latest || *",
    "@sentry/utils": "latest || *",
    "@sentry/opentelemetry": "latest || *",
    "@types/node": "18.15.1",
    "connect": "3.7.0",
    "typescript": "4.9.5",
    "ts-node": "10.9.1"
  },
  "devDependencies": {
    "@sentry-internal/event-proxy-server": "link:../../../event-proxy-server",
    "@playwright/test": "^1.38.1"
  },
  "volta": {
    "extends": "../../package.json"
  }
}
@@ -0,0 +1,62 @@
import type { PlaywrightTestConfig } from '@playwright/test';
import { devices } from '@playwright/test';

const connectPort = 3030;
const eventProxyPort = 3031;

/**
 * See https://playwright.dev/docs/test-configuration.
 */
const config: PlaywrightTestConfig = {
  testDir: './tests',
  /* Maximum time one test can run for. */
  timeout: 150_000,
  expect: {
    /**
     * Maximum time expect() should wait for the condition to be met.
     * For example in `await expect(locator).toHaveText();`
     */
    timeout: 10000,
  },
  /* Run tests in files in parallel */
  fullyParallel: true,
  /* Fail the build on CI if you accidentally left test.only in the source code. */
  forbidOnly: !!process.env.CI,
  retries: 0,
  /* Reporter to use. See https://playwright.dev/docs/test-reporters */
  reporter: 'list',
  /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
  use: {
    /* Maximum time each action such as `click()` can take. Defaults to 0 (no limit). */
    actionTimeout: 0,
    /* Base URL to use in actions like `await page.goto('/')`. */
    baseURL: `http://localhost:${connectPort}`,

    /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
    trace: 'on-first-retry',
  },

  /* Configure projects for major browsers */
  projects: [
    {
      name: 'chromium',
      use: {
        ...devices['Desktop Chrome'],
      },
    },
  ],

  /* Run your local dev server before starting the tests */
  webServer: [
    {
      command: 'pnpm ts-node-script start-event-proxy.ts',
      port: eventProxyPort,
    },
    {
      command: 'pnpm start',
      port: connectPort,
    },
  ],
};

export default config;