How to properly obtain a message hash to send as metadata? #4684
Why not use a new instance of the interceptor for each request message, or something similar, so that the interceptor implementation does not get too complicated?

```java
String hash = computeHash(message);
stub.withInterceptors(new BodyHashPoC(hash)).callMethod(message);
```
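The hash-up-front step can be sketched with only the JDK; `BodyHashExample` and `computeHash` are hypothetical names, and SHA-256 stands in for whatever hash function the project actually uses (the thread's code uses Murmur3 via a custom helper):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BodyHashExample {

    // Hash the serialized message bytes *before* creating the call, so the
    // interceptor only has to attach an already-computed value as metadata.
    static String computeHash(byte[] messageBytes) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] raw = digest.digest(messageBytes);
            StringBuilder hex = new StringBuilder();
            for (byte b : raw) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present in the JDK
        }
    }

    public static void main(String[] args) {
        String hash = computeHash("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(hash);
        // The hash would then be handed to a per-call interceptor, e.g.:
        // stub.withInterceptors(new BodyHashPoC(hash)).callMethod(message);
    }
}
```

Because the hash exists before `start()`, the interceptor can put it in the headers immediately and no start/request buffering is needed.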
I'm trying to build an API around gRPC so there can be a "generic" interceptor, @Autowired with Spring as part of the application context, so that when you get an instance of the gRPC client, it already comes wired with the required interceptor that generates the hash (and other security and informational headers, like a trace id). In the end I'm trying to achieve some level of convenience so that onboarding new programmers is less difficult.
In general it's a bad idea to delay sending headers until sending the message in an interceptor; it's very likely to be broken and should be avoided. What's the particular reason for hashing the message as part of a JWT token, and why isn't mutual TLS alone sufficient for you?
I work at a bank, so one of our possible attack vectors is a malicious user (or hacker) trying to alter the actual parameters of the call. Since the JWT token is signed by a rotating secret, that makes tampering more difficult, but only if we can also hash the message itself in tandem with the JWT token. So mutual TLS is part of the solution, and hashing the message is just another layer of security we would like to maintain. I've done some tests these days, and it seems to work (while I'll admit it's still a brittle design), but I would like to understand more about the threading model of the grpc library... is there any info on this design? Or maybe you could guide me to the classes that do the work? Basically I'm trying to understand whether by design the listener is run inside a thread, or if each call to
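The idea of binding the message hash into the signed token can be sketched with stdlib HMAC only; `BodyHashJwtExample`, `signToken`, and the `bodyHash` claim name are made-up stand-ins for the project's `TokenValidationService`, not its real API:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class BodyHashJwtExample {

    static String base64Url(byte[] data) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(data);
    }

    // Minimal HS256 JWT whose payload carries the body hash as a claim.
    // Tampering with the message changes the hash, which no longer matches
    // the signed claim unless the attacker also knows the rotating secret.
    static String signToken(String bodyHash, byte[] secret) {
        try {
            String header = base64Url(
                    "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
            String payload = base64Url(
                    ("{\"bodyHash\":\"" + bodyHash + "\"}").getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            String signature = base64Url(
                    mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
            return header + "." + payload + "." + signature;
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(signToken("2cf24dba...", "rotating-secret".getBytes(StandardCharsets.UTF_8)));
    }
}
```

A real deployment would use a JWT library and include expiry and issuer claims; this only illustrates why the hash-in-token scheme resists parameter tampering.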
Thanks @dapengzhang0, in the end I had to make an adjustment to make this work in an asynchronous call; I had to track the state of the call and check whether it has been started or not:

```java
@Component
public class BodyHashPoC implements ClientInterceptor {

    private static final Logger CLASS_LOGGER =
            LoggerFactory.getLogger(BodyHashPoC.class.getName());

    @Autowired
    private TokenContractBuilder tContractBuilder;

    @Autowired
    private TokenValidationService tValidation;

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(MethodDescriptor<ReqT, RespT> method,
            CallOptions callOptions, Channel next) {
        final ClientCall<ReqT, RespT> delegate = next.newCall(method, callOptions);
        return new ForwardingClientCall.SimpleForwardingClientCall<ReqT, RespT>(delegate) {

            private boolean started = false;
            private Metadata headers;
            private Listener<RespT> responseListener;
            private int numMessages;

            @Override
            public void start(Listener<RespT> responseListener, Metadata headers) {
                if (CLASS_LOGGER.isDebugEnabled()) {
                    CLASS_LOGGER.debug("In BodyHashPoC.interceptCall().start()");
                }
                // Defer the real start until sendMessage(), when the body hash is known.
                this.headers = headers;
                this.responseListener = responseListener;
            }

            @Override
            public void request(int numMessages) {
                if (started) {
                    super.request(numMessages);
                } else {
                    // Buffer (and accumulate) requests issued before the deferred start.
                    this.numMessages += numMessages;
                }
            }

            @Override
            public void sendMessage(ReqT message) {
                if (CLASS_LOGGER.isDebugEnabled()) {
                    CLASS_LOGGER.debug("In BodyHashPoC.interceptCall().sendMessage()");
                }
                Long hash = null;
                if (message instanceof GeneratedMessageV3) {
                    GeneratedMessageV3 messageV3 = (GeneratedMessageV3) message;
                    final ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    final CodedOutputStream cos = CodedOutputStream.newInstance(baos);
                    try {
                        messageV3.writeTo(cos);
                        cos.flush();
                        hash = TokenSecurityHashFunction.hash(
                                TokenSecurityHashFunctionType.MURMUR3, baos.toByteArray());
                    } catch (IOException e) {
                        CLASS_LOGGER.error("Could not serialize the message for hashing", e);
                    }
                } else {
                    final String className = message == null ? "null" : message.getClass().getName();
                    throw GRPC_CLIENT_ERROR_BUNDLE.messageClassNotSupported(className);
                }
                final TokenContract tContract = tContractBuilder
                        .canal(ExecutionContext.getChannelId())
                        .requestId(ExecutionContext.getTraceCode())
                        .traceId(ExecutionContext.getTraceId())
                        .username(ExecutionContext.getUserId())
                        .userType(ExecutionContext.getUserType())
                        .bodyHash(hash)
                        .requestType(RequestType.REQUEST)
                        .build();
                try {
                    final TokenSpecification tSpecification = tValidation.buildToken(tContract);
                    Metadata.Key<String> apiKey =
                            Metadata.Key.of(SecurityHeaders.API_KEY.value(), Metadata.ASCII_STRING_MARSHALLER);
                    Metadata.Key<String> tokenKey =
                            Metadata.Key.of(SecurityHeaders.API_TOKEN.value(), Metadata.ASCII_STRING_MARSHALLER);
                    headers.put(apiKey, tSpecification.getKey());
                    headers.put(tokenKey, tSpecification.getToken());
                    if (CLASS_LOGGER.isDebugEnabled()) {
                        CLASS_LOGGER.debug("Starting gRPC invocation -> {}", method.getFullMethodName());
                        CLASS_LOGGER.debug(" - Token Key: {}", tSpecification.getKey());
                        CLASS_LOGGER.debug(" - Token: {}", tSpecification.getToken());
                    }
                } catch (TokenSecurityException tse) {
                    CLASS_LOGGER.error("Error generating the call token", tse);
                    throw GRPC_CLIENT_ERROR_BUNDLE.tokenGenerationException(tse);
                }
                if (!started) {
                    // Now that the headers carry the hash, perform the deferred start.
                    super.start(responseListener, headers);
                    super.request(numMessages);
                    started = true;
                }
                super.sendMessage(message);
            }
        };
    }
}
```

So this means that the listener has to be instantiated per-channel and it can't be a Spring bean autowired (or injected) into each possible channel. When you say it's a bad idea to delay sending the headers in the listener, what troubles/issues do you foresee? By "broken", do you mean that the API/library could change and stop working in the future, or that there is an execution path that could break the code?
My concern is that you made assumptions about the grpc library's implementation details and might have hoped that's always the case after a few trials. For example, you seemed to assume that
Thanks @dapengzhang0, I re-read the Javadoc for the `ClientCall` class. It seems that the current design for a streaming API implies that the metadata is only sent once at the start of the stream, and not with each request... if this is the case, sending the JWT token as metadata would be useless and would limit us to only the unary call type... The only other option I can think of right now is to actually "hardcode" a position in the proto contract to send the JWT in each message, and then access this field by reflection, or something like that; what do you think?
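The "access the field by reflection" idea can be illustrated with plain Java reflection; real protobuf messages are immutable and would be populated through the generated builder or the `FieldDescriptor` API, so `PaymentRequest`, `setJwt`, and `TokenFieldExample` here are hypothetical stand-ins that only show the reflective lookup:

```java
import java.lang.reflect.Method;

public class TokenFieldExample {

    // Hypothetical stand-in for a generated request message with a dedicated
    // token field at a hardcoded position in the proto contract.
    public static class PaymentRequest {
        private String jwt;
        public void setJwt(String jwt) { this.jwt = jwt; }
        public String getJwt() { return jwt; }
    }

    // Generic helper: locate the token setter by name at runtime, so one
    // shared component can stamp the JWT onto any request type that declares it.
    static void setToken(Object request, String token) {
        try {
            Method setter = request.getClass().getMethod("setJwt", String.class);
            setter.invoke(request, token);
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException(
                    "Request type has no JWT field: " + request.getClass().getName(), e);
        }
    }

    public static void main(String[] args) {
        PaymentRequest req = new PaymentRequest();
        setToken(req, "header.payload.signature"); // placeholder token
        System.out.println(req.getJwt());
    }
}
```

With protobuf specifically, `message.getDescriptorForType().findFieldByName(...)` plus `Builder.setField(...)` would play the role of `getMethod`/`invoke` above.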
For streaming requests, that is also the solution I can think of. Maybe add another position for a random salt, because the request message itself could be repeated frequently.
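A per-message salt could be as simple as random bytes encoded into an ASCII-safe string field; this stdlib sketch (the class name `SaltExample` is made up) shows one way, so that two identical request bodies still hash differently:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SaltExample {

    private static final SecureRandom RANDOM = new SecureRandom();

    // 16 random bytes per message, Base64-encoded so the salt fits a
    // proto string field and can be mixed into the body hash.
    static String newSalt() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        return Base64.getEncoder().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newSalt());
    }
}
```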
Thanks @dapengzhang0, I'll go with this idea. In our current implementation each JWT is very short-lived, about 2 seconds TTL (max), and that's how we try to prevent replay, and there is also the hashing of the parameters... but a salt should also work for a request with no parameters, and adding a bit of security won't be a bad thing either, so thanks for the suggestion.
What version of gRPC are you using?
1.13.2
What did you expect to see?
Currently in a project I am working on, I need to send a hash of the parameters sent to the server as part of a JWT token that is sent as metadata (in our current REST API it is sent as an HTTP header).
The problem I am facing is that in a standard `ClientInterceptor`, and in a `ClientCall`, the metadata is sent before the `sendMessage` method is called, so I came up with this "solution" (I'm using Spring):
So finally `sendMessage` does all three steps, `start`, `request` and `sendMessage`, but I feel that this solution could break in a non-blocking scenario (I am building some tests now). Could anyone give me advice on whether this solution is OK considering the API design of grpc-java, or should I look into something else?