
TypeError: headers[headerName].trim is not a function in StartFaceLivenessSessionCommand with RekognitionStreamingClient #7121

@aspepper

Description


Describe the bug

I'm encountering a TypeError: headers[headerName].trim is not a function error when attempting to call StartFaceLivenessSessionCommand using the @aws-sdk/client-rekognitionstreaming package (v3.825.0). This issue occurs when passing a Node.js Readable stream as the RequestEventStream parameter. Previously, I faced an Error: Eventstream payload must be a Readable stream when using a WHATWG ReadableStream, which led me to try a Node.js Readable stream instead.

The error suggests that the SDK is attempting to call .trim() on a header value that is not a string, likely introduced by custom middleware or during the request signing process in @smithy/signature-v4. I have tried multiple approaches to resolve this, including serializing events as JSON, using @aws-sdk/eventstream-serde-node, and adjusting headers, but the issue persists.
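For context, the failing call can be reduced to a short demonstration. This is a simplified sketch of the canonicalization step in `@smithy/signature-v4` (the real `getCanonicalHeaders` also handles unsignable headers), showing that any non-string header value that reaches signing produces exactly this TypeError:

```javascript
// Simplified sketch of getCanonicalHeaders from @smithy/signature-v4:
// SigV4 canonicalization assumes every header value is a string and calls
// .trim() on it, so a number, Buffer, or array value throws the reported error.
function getCanonicalHeaders(headers) {
  const canonical = {};
  for (const headerName of Object.keys(headers).sort()) {
    canonical[headerName.toLowerCase()] = headers[headerName].trim().replace(/\s+/g, " ");
  }
  return canonical;
}

// A string value canonicalizes fine:
const ok = getCanonicalHeaders({ "Content-Type": " application/json " });

// A numeric value reproduces the reported TypeError:
let failure;
try {
  getCanonicalHeaders({ "x-test-header": 1024 });
} catch (err) {
  failure = err; // TypeError: headers[headerName].trim is not a function
}
```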

Environment:

Node.js Version: v22.16.0
Operating System: Windows 11
AWS SDK Packages:
@aws-sdk/client-rekognition: 3.825.0
@aws-sdk/client-rekognitionstreaming: 3.825.0
@aws-sdk/client-s3: 3.825.0
@aws-sdk/client-sts: 3.825.0
@aws-sdk/eventstream-serde-node: 3.370.0
@aws-sdk/node-http-handler: 3.370.0
AWS Region: us-east-1

Steps to Reproduce:

Set up AWS credentials with full access to Amazon Rekognition and sts:AssumeRole permission for the role arn:aws:iam::[ACCOUNT_ID]:role/my-role.
npm install
node index.js

The full script (index.js) is provided below:

    // index.js
    
    const fs = require('fs');
    const path = require('path');
    const { Readable } = require('stream');
    const {
      RekognitionClient,
      CreateFaceLivenessSessionCommand,
      GetFaceLivenessSessionResultsCommand
    } = require('@aws-sdk/client-rekognition');
    const {
      RekognitionStreamingClient,
      StartFaceLivenessSessionCommand
    } = require('@aws-sdk/client-rekognitionstreaming');
    const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
    const { NodeHttp2Handler } = require('@aws-sdk/node-http-handler');
    const { STSClient, AssumeRoleCommand } = require('@aws-sdk/client-sts');
    const { createEventStream } = require('@aws-sdk/eventstream-serde-node');
    
    // Configuration
    const ROLE_ARN = 'arn:aws:iam::[ACCOUNT_ID]:role/my-role';
    const S3_BUCKET = '[BUCKET_NAME]';
    const S3_KEY_VIDEO = 'path/to/video.mp4';
    const AWS_REGION = 'us-east-1';
    const VIDEO_WIDTH = 640;
    const VIDEO_HEIGHT = 480;
    const TEMP_VIDEO_PATH = path.resolve(__dirname, 'video.mp4');
    
    // Function to convert ReadableStream to Buffer
    async function streamToBuffer(readableStream) {
      return new Promise((resolve, reject) => {
        const chunks = [];
        readableStream.on('data', (chunk) => chunks.push(chunk));
        readableStream.on('end', () => resolve(Buffer.concat(chunks)));
        readableStream.on('error', reject);
      });
    }
    
    // Function to generate video events
    async function* generateVideoEvents(videoPath, chunkSize = 64 * 1024) {
      const readStream = fs.createReadStream(videoPath, { highWaterMark: chunkSize });
      for await (const chunk of readStream) {
        const event = {
          VideoEvent: {
            VideoChunk: new Uint8Array(chunk),
            TimestampMillis: Date.now()
          }
        };
        // Serialize the event as an event payload
        const serializedEvent = JSON.stringify(event);
        console.log('DEBUG: Serialized event:', serializedEvent);
        yield new Uint8Array(Buffer.from(serializedEvent));
      }
    }
    
    // Main function
    async function main() {
      const rekClient = new RekognitionClient({ region: AWS_REGION });
      const s3Client = new S3Client({ region: AWS_REGION });
    
      // Assume the IAM role
      const stsClient = new STSClient({ region: AWS_REGION });
      let credentials;
      try {
        const assumeRoleResponse = await stsClient.send(
          new AssumeRoleCommand({
            RoleArn: ROLE_ARN,
            RoleSessionName: 'FaceLivenessSession'
          })
        );
        credentials = {
          accessKeyId: assumeRoleResponse.Credentials.AccessKeyId,
          secretAccessKey: assumeRoleResponse.Credentials.SecretAccessKey,
          sessionToken: assumeRoleResponse.Credentials.SessionToken
        };
      } catch (err) {
        console.error('❌ Error assuming IAM role:', err);
        process.exit(1);
      }
    
      const streamingClient = new RekognitionStreamingClient({
        region: AWS_REGION,
        requestHandler: new NodeHttp2Handler(),
        credentials: credentials,
        logger: {
          info: console.log,
          warn: console.warn,
          error: console.error,
          debug: console.debug
        }
      });
    
      console.log(
        'DEBUG: streamingClient.requestHandler.constructor.name =',
        streamingClient.config.requestHandler.constructor.name
      );
    
      // Create session
      console.log('Creating Face Liveness session...');
      let sessionId;
      try {
        const createCmd = new CreateFaceLivenessSessionCommand({
          RoleArn: ROLE_ARN,
          Settings: {
            OutputConfig: {
              S3Bucket: S3_BUCKET,
              S3KeyPrefix: 'face-liveness-output/'
            }
          }
        });
        const createResp = await rekClient.send(createCmd);
        sessionId = createResp.SessionId;
        console.log(`→ Session created with SessionId = ${sessionId}`);
      } catch (err) {
        console.error('❌ Error creating Face Liveness session:', err);
        process.exit(1);
      }
    
      // Download video from S3
      console.log(`Downloading video from s3://${S3_BUCKET}/${S3_KEY_VIDEO} ...`);
      try {
        const getObjCmd = new GetObjectCommand({
          Bucket: S3_BUCKET,
          Key: S3_KEY_VIDEO
        });
        const data = await s3Client.send(getObjCmd);
        const videoBuffer = await streamToBuffer(data.Body);
        fs.writeFileSync(TEMP_VIDEO_PATH, videoBuffer);
        console.log(`→ Video saved to: ${TEMP_VIDEO_PATH}`);
      } catch (err) {
        console.error('❌ Error downloading video from S3:', err);
        process.exit(1);
      }
    
      // Start Face Liveness
      console.log('Starting Face Liveness (building ReadableStream of events)...');
      const videoEventsIterable = generateVideoEvents(TEMP_VIDEO_PATH, 64 * 1024);
      const nodeEventStream = Readable.from(videoEventsIterable, { objectMode: true });
    
      try {
        const startCmd = new StartFaceLivenessSessionCommand({
          SessionId: sessionId,
          ChallengeVersions: ['1.0'],
          VideoWidth: VIDEO_WIDTH,
          VideoHeight: VIDEO_HEIGHT,
          RequestEventStream: nodeEventStream
        });
        await streamingClient.send(startCmd);
        console.log('→ StartFaceLivenessSession successfully sent.');
      } catch (err) {
        console.error('❌ Error in StartFaceLivenessSession:', err);
        process.exit(1);
      }
    
      // Wait for processing
      console.log('Waiting 15 seconds for Rekognition to process...');
      await new Promise((resolve) => setTimeout(resolve, 15000));
    
      // Get results
      console.log('Fetching final session results...');
      try {
        const getResultsCmd = new GetFaceLivenessSessionResultsCommand({
          SessionId: sessionId
        });
        const getResultsResp = await rekClient.send(getResultsCmd);
        console.log('→ Session result:\n', JSON.stringify(getResultsResp, null, 2));
        if (getResultsResp.ReferenceImage && getResultsResp.ReferenceImage.S3Object) {
          const { Bucket, Name } = getResultsResp.ReferenceImage.S3Object;
          console.log(`\nReference Image (S3): s3://${Bucket}/${Name}`);
        }
      } catch (err) {
        console.error('❌ Error in GetFaceLivenessSessionResults:', err);
        process.exit(1);
      }
    
      console.log('\n✅ Flow completed successfully.');
    }
    
    main().catch((err) => {
      console.error('Unhandled error:', err);
      process.exit(1);
    });

Error Output:

C:\...\rekognition-liveness-test>node index.js
DEBUG: streamingClient.requestHandler.constructor.name = NodeHttp2Handler

Creating Face Liveness session...
... 


❌ Error in StartFaceLivenessSession: TypeError: headers[headerName].trim is not a function
    at getCanonicalHeaders (C:\...\rekognition-liveness-test\node_modules\@smithy\signature-v4\dist-cjs\index.js:161:58)
    at SignatureV4.signRequest (C:\...\rekognition-liveness-test\node_modules\@smithy\signature-v4\dist-cjs\index.js:609:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async AwsSdkSigV4Signer.sign (C:\...\rekognition-liveness-test\node_modules\@aws-sdk\core\dist-cjs\submodules\httpAuthSchemes\index.js:108:27)
    at async C:\...\rekognition-liveness-test\node_modules\@smithy\core\dist-cjs\index.js:195:14
    at async C:\...\rekognition-liveness-test\node_modules\@smithy\middleware-retry\dist-cjs\index.js:320:38
    at async C:\...\rekognition-liveness-test\node_modules\@aws-sdk\middleware-websocket\dist-cjs\index.js:129:20
    at async C:\...\rekognition-liveness-test\node_modules\@aws-sdk\middleware-logger\dist-cjs\index.js:33:22
    at async main (C:\...\rekognition-liveness-test\index.js:148:5) {
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}

Steps Already Taken:

Initially encountered Error: Eventstream payload must be a Readable stream when using a WHATWG ReadableStream. Switched to a Node.js Readable stream with objectMode: true to emit { VideoEvent: { VideoChunk: Uint8Array, TimestampMillis: number } } objects.
Tried serializing events as JSON and yielding Uint8Array from JSON strings.
Installed @aws-sdk/eventstream-serde-node and used createEventStream to serialize the stream, but reverted to a simpler approach in the provided code due to persistent errors.
Tested with a minimal stream containing a single VideoEvent with a test Uint8Array, but the error persisted.
Added middleware to normalize headers and set Content-Type: application/vnd.amazon.eventstream, but removed it in the current code to simplify.
Verified video format (H.264, 640x480) using ffprobe and confirmed it matches VIDEO_WIDTH and VIDEO_HEIGHT.
Tested the endpoint https://streaming-rekognition.us-east-1.amazonaws.com/ in a browser, which returned "The specified URI has an invalid path," indicating the endpoint requires a specific HTTP/2 event stream protocol.
Ensured IAM role (arn:aws:iam::[ACCOUNT_ID]:role/my-role) has permissions for rekognition:CreateFaceLivenessSession, rekognition:StartFaceLivenessSession, rekognition:GetFaceLivenessSessionResults, s3:GetObject, and s3:PutObject, and the user has sts:AssumeRole permissions.
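For reference, the header-normalizing middleware mentioned above can be sketched as follows. The `(next) => async (args) => …` shape is the standard Smithy middleware signature; the step placement and the coercion logic are my assumptions, offered as a diagnostic workaround rather than a fix:

```javascript
// Hedged sketch: a "build"-step middleware that coerces every header value to
// a string before the SigV4 signer runs (signing happens in the later
// "finalizeRequest" step), to check whether a non-string header is the cause.
const stringifyHeadersMiddleware = (next) => async (args) => {
  const headers = args.request.headers;
  for (const name of Object.keys(headers)) {
    if (typeof headers[name] !== "string") {
      headers[name] = String(headers[name]);
    }
  }
  return next(args);
};

// Attachment to the client (commented out; requires the streaming client):
// streamingClient.middlewareStack.add(stringifyHeadersMiddleware, {
//   step: "build",
//   name: "stringifyHeadersMiddleware",
// });
```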

Actual Behavior:

The command fails with TypeError: headers[headerName].trim is not a function in the @smithy/signature-v4 module, suggesting an issue with header processing during request signing.

Additional Context:

The script successfully creates a Face Liveness session and downloads the video from S3, indicating that credentials and permissions are correctly configured.
The error occurs specifically during the StartFaceLivenessSessionCommand when the stream is processed.
The endpoint https://streaming-rekognition.us-east-1.amazonaws.com/ is resolved correctly by the SDK, but direct browser access fails, suggesting a specific protocol requirement.

SDK version number

"@aws-sdk/client-rekognition": "^3.825.0"

Which JavaScript Runtime is this issue in?

Node.js

Details of the browser/Node.js/ReactNative version

v22.16.0

Reproduction Steps

Set up AWS credentials with full access to Amazon Rekognition and sts:AssumeRole permission for the role arn:aws:iam::[ACCOUNT_ID]:role/my-role.
npm install
node index.js

Observed Behavior

C:\...\rekognition-liveness-test>node index.js
...

❌ Error in StartFaceLivenessSession: TypeError: headers[headerName].trim is not a function
    at getCanonicalHeaders (C:\...\rekognition-liveness-test\node_modules\@smithy\signature-v4\dist-cjs\index.js:161:58)
    at SignatureV4.signRequest (C:\...\rekognition-liveness-test\node_modules\@smithy\signature-v4\dist-cjs\index.js:609:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async AwsSdkSigV4Signer.sign (C:\...\rekognition-liveness-test\node_modules\@aws-sdk\core\dist-cjs\submodules\httpAuthSchemes\index.js:108:27)
    at async C:\...\rekognition-liveness-test\node_modules\@smithy\core\dist-cjs\index.js:195:14
    at async C:\...\rekognition-liveness-test\node_modules\@smithy\middleware-retry\dist-cjs\index.js:320:38
    at async C:\...\rekognition-liveness-test\node_modules\@aws-sdk\middleware-websocket\dist-cjs\index.js:129:20
    at async C:\...\rekognition-liveness-test\node_modules\@aws-sdk\middleware-logger\dist-cjs\index.js:33:22
    at async main (C:\...\rekognition-liveness-test\index.js:148:5) {
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}

Expected Behavior

🔑 Assuming role to get temporary credentials...
✅ Temporary credentials obtained.
📤 Creating Face Liveness session...
✅ Session created! SessionId=
WebSocket URI = wss://rekognition-streaming-liveness.us-east-1.amazonaws.com/...
⌛ Waiting for Face Liveness results… (Polling every 2s)
⏳ Polling… Status = CREATED, Partial Confidence = 0.00%
⏳ Polling… Status = SUCCEEDED, Partial Confidence = 96.37%
✅ Final status: SUCCEEDED, Final Confidence = 96.37%
ReferenceImage S3: s3://rekognition-liveness-bucket/ResultadosLiveness//ReferenceImage/...
AuditImages:
s3://rekognition-liveness-bucket/LivenessResults//AuditImages/0001.jpg
...
✅ Flow completed successfully.
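The polling behavior in the expected output above can be sketched as follows; `pollForResults` and its `sendResults` callback are my own names, and the status values are assumed to be CREATED / IN_PROGRESS / SUCCEEDED / FAILED / EXPIRED:

```javascript
// Hedged sketch: replace the fixed 15-second sleep with a poll of
// GetFaceLivenessSessionResults every 2 seconds. sendResults is a
// caller-supplied callback, e.g.
//   () => rekClient.send(new GetFaceLivenessSessionResultsCommand({ SessionId: sessionId }))
async function pollForResults(sendResults, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const resp = await sendResults();
    console.log(`⏳ Polling… Status = ${resp.Status}, Confidence = ${(resp.Confidence ?? 0).toFixed(2)}%`);
    if (resp.Status !== "CREATED" && resp.Status !== "IN_PROGRESS") {
      return resp; // terminal status reached
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for Face Liveness results");
}
```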

Possible Solution

No response

Additional Information/Context

No response

Metadata

Labels: bug (This issue is a bug.), needs-triage (This issue or PR still needs to be triaged.)
