Silence parser warning and patch version mismatch when loading parser/current #613

Merged · 5 commits · Feb 17, 2025

Conversation

navidemad (Contributor)

Description

This pull request adds a factory class RubyParserFactory that temporarily disables verbose mode while loading the "parser/current" library. The purpose is to suppress the compatibility warning that is printed when the library is loaded under a Ruby version other than the one it recognizes.

Example warning after the release of Ruby 3.4.1:

warning: parser/current is loading parser/ruby34, which recognizes
3.4.0-compliant syntax, but you are running 3.4.1.
Please see https://github.com/whitequark/parser#compatibility-with-ruby-mri.

By disabling verbose mode, these warnings are suppressed to provide a cleaner output and avoid confusion. The verbose mode is restored after the parser instance is created to maintain the original behavior.

References

Changes

  • Added a new module I18n::Tasks::Scanners to encapsulate the factory class.
  • Implemented the RubyParserFactory class with a create_parser method that temporarily disables verbose mode, loads the "parser/current" library, creates a new parser instance, and restores verbose mode (a sketch follows this list).
  • Added descriptive comments to explain the purpose of the module and provide an example of the warning being suppressed.
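
For reference, a minimal sketch of what the changes above describe, assuming only the module, class, and method names given in this list; the exact implementation in the diff may differ:

    module I18n
      module Tasks
        module Scanners
          # Builds Parser::CurrentRuby instances without printing the
          # "parser/current is loading parser/rubyXY" compatibility warning.
          class RubyParserFactory
            def self.create_parser
              prev = $VERBOSE
              $VERBOSE = nil            # silence the warning emitted when parser/current is required
              require "parser/current"
              ::Parser::CurrentRuby.new
            ensure
              $VERBOSE = prev           # restore the original verbosity afterwards
            end
          end
        end
      end
    end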

Benefits

  • Suppresses compatibility warnings when loading the "parser/current" library, providing a cleaner output for users.
  • Encapsulates the logic for creating a parser instance within a dedicated factory class, promoting code organization and reusability.
  • Maintains the original behavior of the Ruby environment by restoring the verbose mode after the parser instance is created.

Issues

Please let me know if you have any questions or feedback regarding this pull request.
Thank you for considering this contribution to the gem!

prev = $VERBOSE           # remember the current warning setting
$VERBOSE = nil            # silence the version-mismatch warning printed when parser/current is required
require "parser/current"
::Parser::CurrentRuby.new # build the parser; prev is restored afterwards, per the description
Owner

I think a much nicer solution to this would be to send a PR to whitequark/parser adding an argument to Parser::CurrentRuby.new that disables the warning

navidemad (Contributor, Author)

I opened whitequark/parser#1065, we will see. But yeah, you are right: fixing it in the parser itself is better than hotfixing it here.


They won't update their parser for Ruby > 3.4, so we're stuck.

davidwessman (Collaborator) left a comment

LGTM, interesting to see how this will play out with RuboCop and the parsing ecosystem of Ruby.

navidemad (Contributor, Author) commented Jan 25, 2025

I do not know why, but with my PR:

bundle exec i18n-tasks translate-missing --backend=openai 
bundler: failed to load command: i18n-tasks (/Users/dev/.rbenv/versions/3.4.1/lib/ruby/gems/3.4.0/bin/i18n-tasks)                                                                                                                           >  ETA: ??:??:?? 0/161 (0%)
/Users/dev/.rbenv/versions/3.4.1/lib/ruby/gems/3.4.0/gems/faraday-2.12.2/lib/faraday/response/raise_error.rb:30:in 'Faraday::Response::RaiseError#on_complete': the server responded with status 400 (Faraday::BadRequestError)

But it works with the original repository. Do you have an idea?

I debugged a bit by editing the OpenAI::Client setup to:

    def translator
      @translator ||= OpenAI::Client.new(access_token: api_key, log_errors: true) do |f|
        # log full response bodies to stdout via Faraday's logger middleware
        f.response :logger, Logger.new($stdout), bodies: true
      end
    end

I got:

INFO -- : response: {
  "error": {
    "message": "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": null
  }
}

navidemad (Contributor, Author)

I found the problem: the prompt has to contain the word "json" in order to use response_format with json_object.
But I have no clue why it happens on my repository fork and not on yours.
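
For context, the OpenAI API rejects response_format: { type: "json_object" } unless the word "json" appears somewhere in the messages. A minimal sketch against the ruby-openai client (the model name and prompt text here are illustrative, not the gem's actual translation prompt):

    require "openai"

    client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])

    response = client.chat(
      parameters: {
        model: "gpt-4o-mini", # illustrative model choice
        # json_object mode is only accepted when "json" appears in the messages
        response_format: { type: "json_object" },
        messages: [
          { role: "system", content: "Translate the JSON values to French and reply with JSON only." },
          { role: "user", content: '{"greeting":"Hello"}' }
        ]
      }
    )
    puts response.dig("choices", 0, "message", "content")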

davidwessman (Collaborator)

@navidemad please split out the OpenAI logging to a separate PR 🙂

guillaumewrobel commented Feb 17, 2025

Hey @davidwessman, do you think this PR is ready to be merged?

davidwessman (Collaborator)

@navidemad can you rebase and fix conflicts?

navidemad (Contributor, Author)

@davidwessman Done

davidwessman (Collaborator)

@navidemad please fix the lint error

navidemad (Contributor, Author)

> @navidemad please fix the lint error

Done

davidwessman merged commit f81bfba into glebm:main on Feb 17, 2025
8 of 9 checks passed